Artificial intelligence has the potential to make us better people, providing expert-sourced guidance to help us through difficult conversations. But the technology can also generate suspicion, leading people to question the authenticity of these interactions. An electric discussion last week, featuring Jonathan Zittrain and Carissa Véliz, explored the impact of generative AI on human relationships. Navigating the various ways chatbots fill spaces between people, the panelists considered AI in medicine, whether the platforms can be called empathetic, and their effectiveness in coaching and counseling roles. Read the full event write-up here: https://lnkd.in/eWT7dPDa
Edmond & Lily Safra Center for Ethics
Higher Education
Cambridge, Massachusetts 1,335 followers
Leading research in practical ethics since 1987.
About us
Our mission is to strengthen teaching and research about pressing ethical issues; to foster sound norms of ethical reasoning and civic discussion; and to share the work of our community in the public interest. We offer a range of annual research fellowships for undergraduates, graduates, post graduates, and professionals, and run two large-scale research initiatives. Learn more on our website!
- Website: https://ethics.harvard.edu
- Industry: Higher Education
- Company size: 51-200 employees
- Headquarters: Cambridge, Massachusetts
- Type: Nonprofit
- Founded: 1987
- Specialties: ethics, research, writing, teaching, civic education, policy, scholarship, practical ethics, political theory, philosophy, and tech ethics
Locations
- Primary: 124 Mt. Auburn Street, 520N, Cambridge, Massachusetts 02138, US
Updates
Stay tuned for details about the winning projects from our successful instructional design hackathon today!
We had an exciting and productive day at Edmond & Lily Safra Center for Ethics at the first *ever* ethics pedagogy hackathon for making games & lesson plans for practicing ethical reasoning and skills for engaging in civic discourse. I was floored by the enthusiasm and creativity with which people tackled the challenge and the phenomenally promising products they created in only six hours! The winning team and the runner-up included team members who hadn't met before today, yet they meshed brilliantly and had fun while creating exciting products. Special thanks to our judges, Marta McAlister, Eric Beerbohm, and Eliza O'Neil, who volunteered their time to help pick the winners (whose projects we will share here soon), and a special shoutout to Maxine Gill for envisioning the hackathon and making it a reality and a success in a short period of time.
We are delighted to welcome our 2026-2027 Fellows-in-Residence and Civil Discourse Fellows! This group brings together exemplary scholars across philosophy, law, political science, and public policy to examine how communication, institutions, and technology shape social and political life. Read about each of them here: https://lnkd.in/egSwVT3E
DEADLINE EXTENDED TO MAY 15! Do you want to engage in civil dialogues with students across the country? Are you eager to hear from people with different perspectives from your own? If so, the Intercollegiate Civil Disagreement Partnership Fellowship might be for you! The Edmond & Lily Safra Center for Ethics, in collaboration with five other institutions, is offering a year-long hybrid experience, beginning in the summer, that develops students' abilities to engage in and lead conversations about difficult, important topics across political difference at their respective universities and beyond. Harvard University undergraduates from all fields of study and political identities are encouraged to apply. Learn more about the program and apply here: https://lnkd.in/ePVEP5cB
Edmond & Lily Safra Center for Ethics reposted this
What a wonderful experience to be at Harvard, hosted by the Edmond & Lily Safra Center for Ethics (thank you!)! It was amazing to catch up with Sushma R. and to exchange #books with the incredible Dr. Joy Buolamwini! One of the perks of academic life: meeting incredible people. #Prophecy
Join us this Monday, April 27 at 4:30pm EST to hear Carissa Véliz in person for a panel discussion with Jonathan Zittrain as they explore AI and human relationships. Free and open to the public, register below. Can't make it to Cambridge? Tune into the livestream on YouTube: https://lnkd.in/ewZr_7kg
WOW! Carissa Véliz delivered this year one of the most astute, original, and insightful TED talks I have heard in a very long time. I had never before seen technologists who make bold PREDICTIONS as sinister actors exercising POWER over others, and over the future. Listen to this talk. A star is born! Véliz is a new kind of public figure: a prophet against prophecy. https://lnkd.in/etRBAGsU
Carissa Véliz: Beware the power of prediction
https://www.ted.com
Calling all undergraduates, graduate students, and young scholars from any discipline at Boston-area colleges and universities! Sign up to design a lesson plan or game that helps students engage with difficult, contested, or uncomfortable conversations. This full-day instructional design "hackathon" will take place on May 1, 2026 in Cambridge, MA. Projects will be evaluated by our panel of judges, including Marta McAlister, Director of Gemini Education at Google; Jake Fay, Director of Education at the Constructive Dialogue Institute; and Eric Beerbohm, Alfred and Rebecca Lin Professor of Government at Harvard University and Faculty Director of the Edmond & Lily Safra Center for Ethics. https://lnkd.in/eWUDk3de
As generative AI systems increasingly shape how people learn, work, seek companionship, and experience solitude, we ask: what happens to human relationships when technologies can meet us in the very forms of interaction through which those relationships have traditionally been constituted? How might the availability of fluent, responsive, non-human interlocutors reshape the value we place on the effort, vulnerability, and reciprocity required by relationships with other humans—and, in turn, our understanding of the human condition itself? Join Jonathan Zittrain of Harvard University and Carissa Véliz of University of Oxford on Monday, April 27 at 4:30pm EST to explore this new territory. Moderated by Edmond & Lily Safra Center for Ethics Faculty Director Eric Beerbohm. Free and open to the public, register below. Can't make it to Cambridge? Tune into the livestream on YouTube. https://lnkd.in/ewZr_7kg
The first of Isaac Asimov's "Three Laws of Robotics" states: a robot may not injure a human being or, through inaction, allow a human being to come to harm. A new study by Sarah Hubbard, David Kidd (our Chief Assessment Scientist), and Andrei Stupu, PhD, indicates that this law may actually guide present-day AI models in their ethical decision making. In a new article, "Crocodile Tears: Can the Ethical-Moral Intelligence of AI Models Be Trusted?," published in AI and Ethics, Hubbard et al. detail their testing of four AI models (GPT, DeepSeek, Llama, and Claude) on a range of ethical dilemmas. They compared the AI responses with human responses to assess the ethical-moral intelligence of AI. Their research uncovers some potentially troubling conclusions. Read more: https://lnkd.in/e_ZdtitK
In her new book, Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI, University of Oxford professor Carissa Véliz explains how putting too much stock in others’ predictions makes us vulnerable to charlatans, con artists, dubious technology, and self-deception. Join us in Cambridge on Monday, April 27 at 12pm EST for an Ethics Exchange event featuring Professor Véliz to learn how AI is more likely to increase risk than decrease it, and several other insights drawn from her research. Register here: https://lnkd.in/eNUkeuKg