aha symposium 2025
advancing humans with ai (aha).. mit aha speakers on ai..
[https://www.media.mit.edu/events/aha-symposium/]:
AHA Symposium 2025: Can we design AI to support human flourishing?
April 10, 2025 | 8:30am–5:30pm
The AHA program is excited to announce its inaugural symposium with a goal of discussing what is arguably one of the most important questions of our time: What future with AI do we want to live in, and how can we design and deploy AI that improves the human experience?
The recording is available here (6.5 hr)
people speaking include:
Pattie Maes, Professor at MIT Media Lab, Co-director of AHA
Tristan Harris, Executive Director and Co-founder of the Center for Humane Technology
Arianna Huffington, Co-founder of HuffPost, Founder and CEO of Thrive Global
Douglas Rushkoff, Author of Team Human, Professor at the City University of New York
Pat Pataranutaporn, Co-director of AHA, MIT Media Lab
Rosalind Picard, Professor at MIT Media Lab, Director of Affective Computing Research
Sherry Turkle, Professor of the Social Studies of Science and Technology, MIT
Sandhini Agarwal, Member of Technical Staff, Trustworthy AI team lead, OpenAI
Deb Roy, Professor at MIT Media Lab, Director of MIT Center for Constructive Communication
Jonnie Penn, Professor of AI Ethics and Society, University of Cambridge
David Rand, Professor of Management Science and Brain and Cognitive Sciences at MIT
Mor Naaman, Professor of Information Science at Cornell Tech
Andrew Lippman, Senior Research Scientist, Associate Director of MIT Media Lab
Brendan McCord, Founder & Chair, Cosmos Institute
Isabella Loaiza, Postdoctoral Researcher at the MIT Sloan School of Management
Jaime Teevan, Chief Scientist and Technical Fellow at Microsoft
Mitchel Resnick, Professor at MIT Media Lab, Director of Lifelong Kindergarten group
Howard Gardner, Professor at School of Education, Harvard University
Pat Yongpradit, Chief Academic Officer at Code.org, Lead of TeachAI
Tod Machover, Professor at MIT Media Lab, Director of Opera of the Future Group
Jaron Lanier, Prime Unifying Scientist, Office of the CTO, Microsoft
pattie maes, tristan harris, arianna huffington, douglas rushkoff, sherry turkle, deb roy, mitch resnick, howard gardner, jaron lanier
Can we design AI to support human flourishing?
A 1-day symposium to launch the AHA research program
AI is here to stay, but how do we ensure that people flourish in a world of pervasive AI use? The MIT Media Lab’s Advancing Humans with AI (AHA) research program is excited to announce its inaugural symposium with a goal of discussing what is arguably one of the most important questions of our time: What future with AI do we want to live in, and how can we design and deploy AI that improves the human experience?
In-person attendance at the MIT Media Lab is by invitation only, but the symposium will be streamed online. For more information, contact aha@media.mit.edu.
The AI community is almost solely focused on reaching AGI and optimizing models to make them more accurate, efficient, equitable and safe. But few researchers are asking how we can optimize the design of AI for people. While AI offers great potential to improve the human experience, its pervasive use may simultaneously lead to negative human outcomes such as overreliance and misplaced trust, loss of understanding, agency and skills, misinformation and manipulation, privacy erosion, social isolation, and unhealthy attachments. The symposium will debate which human outcomes we may want to optimize for (issues such as human agency, enlightenment, meaning, independence, understanding, and connectedness) and what approaches and methods may support them.
The inaugural AHA symposium will bring together leading experts and innovators from industry and academia to discuss possible futures with AI that elucidate these questions and help us make informed decisions. The history of social media offers a warning: while originally developed with the aim of strengthening social connections, widespread use has resulted in unanticipated consequences including increased polarization, a loss of truth, increased rates of anxiety and depression, higher loneliness, a loss of privacy and more. As AI rapidly advances and permeates all aspects of our lives, we need to ask: what do people stand to gain or lose when AI is used ubiquitously?
Panels, talks and discussions will dive into AI’s impact along several dimensions of human existence including:
Interior life, examining personal growth and emotional wellbeing,
Social life, examining interpersonal connections and social information networks,
Vocational life, examining professional fulfillment and the future experience of work,
Cerebral life, examining learning, creativity and intellectual growth, and
Creative life, examining how we express ourselves and create meaningful experiences.
to me.. just need first two (authenticity and attachment).. ie: org around two legit needs.. other three are cancerous distractions
In between panel sessions, Media Lab researchers will present thought-provoking demonstrations and sneak peeks into relevant research.
notes/quotes from 6.5 hr vimeo embedded on site:
53 min – tristan: engagement was the default setting.. how do we see in terms of human vulnerabilities..
55 min – tristan: not just individual choice..
need to try spaces of permission where people have nothing to prove to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
44 min – arianna: food, sleep, movement, stress management and connection.. when healthier.. will make wiser choices.. less likely to engage in unhealthy behavior.. gps for the soul
we need a means to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
45 min – arianna @ariannahuff: best thing about this gps for the soul is that it doesn’t judge us.. t
huge huge huge.. the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ie: whatever for a year.. a legit sabbatical ish transition
otherwise we’ll keep perpetuating the same song.. the whac-a-mole-ing ness of sea world.. of not-us ness
46 min – arianna: .. assuming humans are static.. i don’t believe that.. my ambition is everyday to be diff.. closer to my essential nature.. that’s what excites me more than anything.. what i love about gps not judging us .. a bit like gps in car.. and why so many are drawn to chatbots.. because they have been judged so much and being judged is one of ways that we profoundly avoid changing
need the thing tech can do but we can’t.. ie: tech w/o judgment (nonjudgmental exponential labeling)
47 min – arianna: to the doomsayers of ai.. that it is going to hack civilization.. that’s already happened.. the operating system of civ is hacked..
since forever.. black science of people/whales law et al
56 min – tristan: all being shaped for attention max incentive.. we should change those incentives to new ones and new business models.. we think of tech as progress.. but we have a history of getting this wrong
if incentive ness.. already cancerous distraction
1:09 – arianna: we have a category error.. based on a false assumption of what it is to be human.. t
again.. since forever.. black science of people/whales law et al