ron ivey at aha

(second speaker i listened to.. first was petr slovak at aha which intro’d me to mit aha speakers on ai and advancing humans with ai (aha) and aha symposium 2025..)
via tweet @medialab [https://x.com/medialab/status/1963327914260185591]:
At 4:30pm ET on Thursday, September 4, join us for the next installment of the seminar series hosted by @AHA_MediaLab! Our guest speakers will be @ronivey, Jonathan Teubner, Nathanael Fast, and @Ravi Iyer; the discussion will be moderated by Prof. Pattie Maes (@fluidinterfaces). [https://media.mit.edu/events/aha-speaker-series-ron-ivey/]
from link:
Join us for our online seminar series event hosted by MIT Media Lab’s Advancing Humans with AI (AHA) research program. This event features Ron Ivey, Research Fellow at the Harvard Human Flourishing Program, co-leading the Trust and Belonging Initiative. He is joined by his colleagues Jonathan Teubner, Nathanael Fast, and Ravi Iyer. This discussion will be moderated by Dr. Pattie Maes.
Abstract
Artificial intelligence (AI) innovation offers numerous benefits, yet its rapid development also presents significant risks, particularly for children. AI chatbots, powered by Large Language Models (LLMs), are becoming increasingly integrated into daily life, with platforms such as ChatGPT and Character.AI attracting hundreds of millions of users, including minors. While AI chatbots can provide mental health support and enhance communication skills, they also pose serious risks, including social isolation, exposure to child abuse, and even suicide. Given the global decline in youth mental health and the documented impact of technology (e.g., social media) on youth well-being, this issue demands urgent attention. Current AI governance frameworks often overlook the developmental needs and rights of children, failing to ensure that AI technologies foster human flourishing rather than cause harm..t
for that.. need to get to the root of problem.. for all of us.. otherwise just perpetuating survival triage et al
This presentation argues that AI companies have both an opportunity and a responsibility to prioritize child well-being *by designing chatbots that enhance, rather than replace, human relationships..t To address these risks, **we propose creating a Global Task Force on AI and Child Well-Being, led by G20 nations, to develop innovative standards for AI chatbot design and deployment..t
*rather.. ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us
tech as it could be.. ai as augmenting interconnectedness
**mufleh humanity law: we have seen advances in every aspect of our lives except our humanity– Luma Mufleh
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ie: whatever for a year.. a legit sabbatical ish transition
otherwise we’ll keep perpetuating the same song.. the whac-a-mole-ing ness of sea world.. of not-us ness
This initiative requires a collaborative, multi-sector approach to: 1. Developing an AI design paradigm that promotes children’s social and relational development 2. Codifying this design paradigm into international technical standards, and 3. Demonstrating its implementation through independent third-party testing. The presentation offers a roadmap for AI innovation that prioritizes child well-being.
cancerous distractions
Speaker Bio
Ron Ivey is a writer, researcher, and policy advisor with a *focus on social trust, belonging, and human flourishing..t He serves as a Research Fellow at the Harvard Human Flourishing Program, co-leading the Trust and Belonging Initiative, and is also a Fellow at both the Centre for Public Impact and the Global Solutions Initiative—advising the G20 on policy evaluation. He has spent 24 years forging impactful collaborations across industry, government, academia, and nonprofits to drive positive social impact. **In 2024, he launched HumanConnections.AI to ensure artificial intelligence truly enhances human flourishing and strengthens social bonds. In 2017, he founded the Rembrandt Collective to help businesses develop strategies for trust, alignment, and social impact. Ron sits on multiple advisory boards, including the OECD’s Trust in Business Initiative and is a founding officer of Friends of Notre Dame de Paris, which has raised over $50 million for the Cathedral’s restoration. His work has been profiled in Financial Times, Newsweek, American Affairs, and The New Statesman. Recognized for his thought leadership, Ron has been asked to speak at events like SXSW, the House of Beautiful Business, and the European Alpbach Forum.
*nothing to date has gotten to the root of problem
legit freedom/trust/belonging/flourishing will only happen if it’s all of us.. and in order to be all of us.. has to be sans any form of measuring, accounting, people telling other people what to do
ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ie: whatever for a year.. a legit sabbatical ish transition
otherwise we’ll keep perpetuating the same song.. the whac-a-mole-ing ness of sea world.. of not-us ness
Jonathan D. Teubner is a Research Associate at the *Human Flourishing Program, where he leads the AI and Flourishing Initiative. He has published broadly in the field of history of philosophy, theology, and cultural sociology, and is the **author of Charity after Augustine: Solidarity, Conflict, and the Practices of Charity (Oxford University Press, 2024) and Prayer after Augustine: A Study in the Development of the Latin Tradition (Oxford University Press, 2018), the latter of which won the Manfred Lautenschlaeger Award for Theological Promise in 2019. Along with Sarah Coakley and Richard Cross, Teubner is the co-editor of the Oxford Handbook to the Historical Reception of Theology (Oxford University Press, forthcoming 2025). Teubner has held faculty positions at the Australian Catholic University and at the University of Virginia, where he led a collaborative team of data scientists and scholars across the social sciences to create AI tools to predict political and social violence. Teubner’s insights and analysis have appeared in The New York Times, The Economist, and The Hill; he is regularly interviewed by BBC, CNN, Scripps News and NBC Nightly News, and is a contributing editor at The Hedgehog Review. In 2022, he co-founded FilterLabs, a data analytics company that leverages artificial intelligence to source high-quality localized data in hard-to-reach regions of the world.
*mufleh humanity law: we have seen advances in every aspect of our lives except our humanity– Luma Mufleh
**difference between mutual aid and charity.. yet both still forms of people telling other people what to do
Nathanael Fast is an Associate Professor of Management and Organization at the USC Marshall School of Business, Director of the Neely Center for Ethical Leadership and Decision Making, and Co-Director of the Psychology of Technology Institute. He studies the psychological underpinnings of power, leadership, and technology adoption. His research examines how power and status hierarchies shape decision making, how people’s identities shape their professional networks, and how AI is shaping the future. He received his PhD in Organizational Behavior from Stanford University and has been recognized for both teaching and research, including USC’s Golden Apple Teaching Award, the Dean’s Award for Excellence in Research, and recognition from Poets & Quants.
how we gather in a space is huge.. need to try spaces of permission where people have nothing to prove to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
Ravi Iyer is a technologist and academic psychologist working to improve technology’s impact on society. He is currently the Research Director for the USC Marshall School’s Neely Center and he helps manage the Psychology of Technology Institute. Previous to this role, he led data science, research, and product teams across Facebook toward improving the societal impact of social media. His work on improving social media’s impact on society has been featured in numerous academic articles as well as in press outlets such as the Wall Street Journal, New York Times, and Wired. He specifically advocates for design-based solutions that improve social value, while mitigating concerns about over-enforcement. He also was a cofounder and the initial Chief Data Scientist of Ranker.com. He has a Ph.D. in Social Psychology from the University of Southern California. He has published dozens of scholarly articles that have collectively been cited over 10,000 times and written about in dozens of press articles. Most of his scholarly work concerns understanding human values and bridging societal divisions.
notes/quotes from session on zoom:
pattie: intro.. risks posed by ai to children and measures we can take to mitigate those risks.. how to make ai to advance people and human flourishing
ron ivey: ai and pos/neg impacts on youth: ai, business, markets, political, cultural & social systems.. close relationships essential to human flourishing.. trend in decline.. chat bots play into that trend.. from nate on power dynamics of play.. ravi on tech world.. jonathan on social and also ceo of ai co..
jonathan: human flourishing program: 1\ promotion of human flourishing 2\ methodological research.. find some way to get social sci and humanities to collab.. founded in 2016 at harvard.. flagship research project: the global flourishing study.. 240 000 participants over 22 countries in 5 waves.. data on 1st wave is out..
nate: neely center for ethical leadership.. to help those leaders we have tools/data/networks to help them.. work across academia.. tech co’s.. we take a systems approach.. donella meadows..t track user experiences.. purpose driven not profit driven.. need feedback across a lot of diff stakeholders
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
tracking = not left to own devices ness.. ie: a raised eyebrow ness et al
ron: noesis collab which i’m founding.. building bridge between ai and human flourishing.. bridge between those building and those researching and find funding.. bring folks together in collab work
jonathan: human flourishing: risen to buzz term these days.. so we define it .. 6 domains: 1-5 – important in own right.. health, relationship, satisfaction.. 6th is means to attaining other 5.. we don’t see these as exhaustive but universal across cultures.. an ideal state that no one attains on own.. relative attainment to state in which all aspects are good including context in which a person lives.. we all live in states where one of these things is less than ideal.. so we have a 12 item assessment and larger ones for global..
ron: social capabilities.. lists unis doing the same.. due to modern thinkers like nussbaum.. social/emotional intell at the core
jon: overview at decline of our social capabilities.. over past 18 months.. not just american/western problem.. global youth going down.. and now cohort starting much lower
measuring things – measuring whales.. oi.. need hari rat park law et al
ron: amt of time per day on consumer mobile apps.. rise of social ai.. making social isolation worse
jon: we were seeing clear benefits thru use of these tools.. support for social anxiety, mental health, communication skills, accessibility/inclusivity.. but also seeing a couple neg’s .. decrease of f-to-f interaction.. reduced empathy, emotional intell..
ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us – locally.. face to face.. (tech as it could be.. ai as augmenting interconnectedness)
ron: individual impact and then what happens at scale.. how does that impact social trust and our shared sense of reality.. rising stories of troubling interactions between chat bots and individuals.. in some cases abuse.. encouraging suicide.. happening in the chat bot child relationship.. now to encounters with the industry.. jon and i were working on the loneliness question.. there was suggestion of tech as solution.. i had strong words for that.. series of events in 2023-4 to get heads around what is actually happening
jon: one thing we’ve done is bring people together on it.. ie: salon 2024.. the academic community and practitioners and investors.. how to design tech that is conducive for human connection..t
need 1st/most: means (nonjudgmental expo labeling) to undo hierarchical listening as global detox so we can org around legit needs.. tech as it could be.. ai as augmenting interconnectedness
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ron: global institutions interested in this.. we have choices and can impact these systems
ravi: 5 recommendations.. how to improve impact on site.. 1\ realizing bots are non human 2\ protect intimacy – manip psych tactics et al 3\ allow users control 4\ measure pos/neg impacts of user experience 5\ understand high risk interactions & steps to mitigate.. make sure we have appropriate reporting.. welcome to comment in the google chat and we’ll make adjustments
ooof
link shared (via ravi) of 5 principles in chat: https://docs.google.com/document/d/1wnlrNPuh-9MEwb3l0nPTdhbN152I_HUaJJjRtg9opLo/edit?pli=1&tab=t.0 and https://www.noesiscollaborative.org/cambridge2025
ron: put this design in larger recommends to the g20: 1\ youth at center 2\ global task force 3\ design paradigm for ai chatbots 4\ codify design and principles 5\ lead industry wide adoption/standardization 6\ implement continuous improvement .. intense interest on this topic esp given last couple months.. how to get involved: comment on google docs (links in chat), send research, attend aha convenings.. we’d love to hear about your research..
q&a:
ravi: on positives.. people report learning.. how to make spreadsheet formula.. we want to allow for learning.. but be careful about people starting to think of chatbot as human
jon: pos use case of chatbot: helping children w disorders.. help with communication on that.. so decreased depression..
ron: could be a case for policy/reg to protect kids.. we’re having those discussions right now.. opp of race to bottom.. which is what we’re seeing
[so much academia/institutional walling.. dang.. so focused they can’t hear unless speaking their language]
nate: really need to ed public about this.. and need carrots for business.. and stick for govt..
ron: in terms of standards piece.. rich convo happening now.. where design code is great tool for those convos.. putting it into larger standards body.. accepted practices/ways to do that..t from community perspective.. using platforms that have 85 million users.. and communicating how best to use these tools.. need community support for ie: if one family has an ai barbie.. smart phones.. et al.. civil society and faith based orgs leaning on these topics
if want legit human flourishing.. need to quit perpetuating survival triage.. @ronivey @AHA_MediaLab @PattieMaes
ravi: to know what % of kids are smoking, et al.. also % experiencing something creepy on chatbot
ravi: we have rational part and intuitive part.. kids are often more intuitive.. so we have to design that people are intuitive creatures first
not yet scrambled ness
ron: full range about that .. how ai might not have that capacity (for intuitive ness) so what age is it appropriate for child to interact w ai.. need to be thoughtful about intro tool to increase literacy.. or as a tech.. rush to get ai lit.. to throw in front of kids..
nate: lit also important for society to know what beliefs et al.. just not our area
ravi: 100% parent should be involved.. we want to design these products with agency in mind
ron: on how to balance ie: parents vs childrens rights
nate: if tried these models.. for intimacy.. they will try to isolate the user.. so one design would be to talk about friends/fam.. to counter that.. have metrics to measure relationship w family.. we’re smart we can see if it’s working or not
ron: social capabilities framework .. a prototype.. 1\ to be able to measure how these are improving or decaying
jon: there’s not sometimes a tech solutions.. sometimes you should put down the tech.. we get sucked into tech solutionism.. if you want to talk to fam.. put the phone down.. embed it w/in larger cultures/systems.. tech shouldn’t be the sum of it
oooof.. unsettling.. total seat at the table ness.. and for a global initiative.. for human flourishing.. oooof.. just perpetuating mufleh humanity law
again.. if want legit human flourishing.. need to quit perpetuating survival triage.. @ronivey @AHA_MediaLab @PattieMaes
findings from on the ground ness:
1\ undisturbed ecosystem (common\ing) can happen
2\ if we create a way to facil the seeming chaos of 8b legit free people
______
_____
______
_____
_____
_____
- ai
- advancing humans with ai (aha)
- ai as augmenting interconnectedness
- ai as nonjudgmental expo labeling
- alison on ai
- anne on ai
- ben on agi ness
- ben and joscha on conscious ai
- benjamin on ai
- cory on ai
- deb on ai
- ef on appropriate tech
- evgeny on ai
- evgeny on ai take 2
- george and michel ai convo
- george on ai
- george on metamodern ai shaman ness
- human connections dot ai
- jaron on ai
- jon on ai
- kevin (o) on ai
- mit aha speakers on ai
- monika on ai
- nicolas on ai
- noam on ai
- paul on ai
- peter on ai
- sam on ai
- tristan and aza on ai
_____
_____
_____


