advancing humans with ai (aha)

advancing humans with ai (aha) site
(programs included: aha symposium 2025;.. mit aha speakers on ai;.. )
from aha site [https://www.media.mit.edu/groups/aha/overview/]:
As AI advances, will people advance as well?..t
depends how we define/use ai.. yes if ai as augmenting interconnectedness
mufleh humanity law: we have seen advances in every aspect of our lives except our humanity –Luma Mufleh
Advancing Humans with AI (AHA) is a multi-faculty research program that aims to understand the human experience of pervasive AI and design the interaction between people and AI *to foster human flourishing. .t The program creates a larger effort around the design of human-AI interaction, **informed by a deep understanding of human needs and behavior. .t By closely collaborating with AI developers and other stakeholders in a culture of ***prototyping and experimentation, we aim for this research to have real world impact and move AI deployments in a direction benefiting humanity.
*need 1st/most: means (nonjudgmental expo labeling) to undo hierarchical listening as global detox so we can org around legit needs
**if focus on behavior won’t get to legit needs
***findings from on the ground ness:
1\ undisturbed ecosystem (common\ing) can happen
2\ if we create a way to facil the seeming chaos of 8b legit free people
AI is not just an engineering challenge, it is also a *human design problem. For AI to live up to its lofty expectations and benefit humankind, it is important that we not just optimize AI itself and make it more accurate and safe, but that we also understand how people respond to interaction with AI and **how we best design that interaction so people and humanity benefit.
*nothing to date has gotten to the root of problem
legit freedom will only happen if it’s all of us.. and in order to be all of us.. has to be sans any form of measuring, accounting, people telling other people what to do
how we gather in a space is huge.. need to try spaces of permission where people have nothing to prove to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
**ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ie: whatever for a year.. a legit sabbatical ish transition
otherwise we’ll keep perpetuating the same song.. the whac-a-mole-ing ness of sea world.. of not-us ness
AHA Seminar Series 2025
Join us for the MIT Media Lab’s Advancing Humans with AI (AHA) Seminar Series, where we bring together world experts on artificial intelligence and human flourishing to share their cutting-edge research and insights. This seminar series creates a forum for interdisciplinary dialogue on how AI technologies can be developed and deployed to enhance human flourishing and well-being.
The seminar happens every other Thursday at 4:30 pm
AHA Symposium 2025: Can we design AI to support human flourishing?
April 10, 2025 | 8:30 am – 5:30 pm
The AHA program is excited to announce its inaugural symposium with a goal of discussing what is arguably one of the most important questions of our time: What future with AI do we want to live in and how can we design and deploy AI that improves the human experience?
The recording is available here
Why this program?
We are in a period of rapid development in AI, *with most of that development happening in industry rather than academia. Developers of foundational AI models and applications are in a race to reach AGI, spending most of their energy and attention on engineering challenges and optimization of AI models. While they are focused on issues such as improving accuracy, efficiency, safety and reducing bias, they devote less attention to **understanding how people respond to interacting with seemingly intelligent systems and how to best design models, interfaces and applications to maximize desired human outcomes.
*both cancerous distractions.. because **won’t get to ‘max’ human ness (legit free).. if any form of measuring, accounting, people telling other people what to do.. ie: intellectness as cancerous distraction et al
Research Focus
Research methodologies
The goals of the AHA program are ambitious and multifaceted. We aim to establish a new research field and community dedicated to the study of human augmentation with AI that pursues the following methods of inquiry:
Invent
Invent new models, methods, and interfaces that elevate people: Develop *novel methods and techniques for interaction with AI that increase human capabilities, agency, and flourishing..t
begs ai as augmenting interconnectedness
imagine if we just focused on listening to the itch in 8b souls.. first thing.. everyday.. and used that data to connect/coord us.. we might just get to a more antifragile, healthy, thriving world.. the ecosystem we keep longing for
humanity needs a leap.. to get back/to simultaneous spontaneity .. simultaneous fittingness.. everyone in sync.. the dance
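(a minimal, hypothetical sketch of what ‘use that data to connect us’ might look like mechanically.. the names, entries, and word-overlap rule below are made-up assumptions for illustration only.. not anything the aha program or this page prescribes.. just the shape: daily input, no scoring/judging, only connecting)
```python
from collections import defaultdict

# hypothetical sketch only: names/entries/matching rule are illustrative assumptions,
# not anything the aha program (or this page) specifies
entries = {
    "person_a": "learning to build small boats",
    "person_b": "boats and open water navigation",
    "person_c": "growing food in small spaces",
}

def connect(daily_entries):
    """group people whose entries for the day share a word.. no scoring, no ranking"""
    by_word = defaultdict(set)
    for person, text in daily_entries.items():
        for word in set(text.lower().split()):
            by_word[word].add(person)
    # keep only words that actually connect two or more people
    return {word: people for word, people in by_word.items() if len(people) > 1}

print(connect(entries))
# e.g. {'boats': {'person_a', 'person_b'}, 'small': {'person_a', 'person_c'}}
```
the sketch is only about the shape of the mechanism.. a real version would have to work at the scale of 8b people, every day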
Investigate
Investigate the positive and negative impact of AI use: Study how design choices for interactive AI affect people’s behavior and experience, both positively and negatively.
irrelevant s .. so cancerous distractions
Inspire
Inspire potential futures and applications: Design and deployment of novel AI experiences that augment people and their ability to unlock their full potential.
unlock full ness if restore missing pieces.. aka: org around legit needs
A key objective is to bridge the gap between AI developers, the academic research community, industry and target AI users by fostering an environment of joint design, experimentation, and knowledge-sharing that benefits all. We are committed to creating tangible, open AI technologies and solutions—tools, models, prototypes, methods, datasets, best practices, open source projects, and even startups / spinoffs—ensuring the initiative has practical and meaningful impact.
irrelevant s .. so cancerous distractions
the thing we’ve not yet tried/seen: the unconditional part of left to own devices ness
[‘in an undisturbed ecosystem ..the individual left to its own devices.. serves the whole’ –dana meadows]
there’s a legit use of tech (nonjudgmental exponential labeling) to facil the seeming chaos of a global detox leap/dance.. for (blank)’s sake..
ie: whatever for a year.. a legit sabbatical ish transition
Core research questions
1. Comprehension & Agency
We study how AI interactions can *enhance critical thinking skills and preserve human agency by supporting human reasoning, and helping people maintain discernment in an increasingly complex information landscape.
*again.. intellectness as cancerous distraction et al
need to try spaces of permission where people have nothing to prove to facil curiosity over decision making.. because the finite set of choices of decision making is unmooring us.. keeping us from us..
ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
2. Physical & Mental Wellbeing
Our work investigates *holistic approaches to both physical and mental wellbeing, exploring how AI can **support behavior change, monitor health patterns, and provide personalized support.
*nothing to date has been legit holistic.. and still won’t be as long as any form of **these
3. Curiosity & Learning
Our research explores personalized learning experiences that *adapt to individual interests and **knowledge gaps, motivating deeper engagement through AI systems that support each person’s unique learning journey.
won’t get to *this.. if **this..
ie: imagine if we listen to the itch-in-8b-souls 1st thing everyday & use that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
there were 3 more.. thinking this might be sooo frustrating to go thru
Moonshots
Complementing its extensive research agenda, the AHA program has established these ambitious moonshot goals:
*1\ https://www.media.mit.edu/projects/atlas-of-human-ai-interaction/overview/
**2\ https://www.media.mit.edu/projects/flourishing-benchmarks/overview/
***3\ https://www.media.mit.edu/projects/global-observatory-for-human-impact-of-ai/overview/
*if ‘maps the complex landscape of empirical findings in human-AI interaction research’ .. nothing to date legit free.. again: mufleh humanity law et al
**if ‘Drawing from multidisciplinary research on well-being, we identify six critical domains where such benchmarks are needed to evaluate the potential for AI systems to encourage flourishing and the risk of negative outcomes. This work establishes a foundation for AI development practices that prioritize human flourishing as a central objective rather than an incidental outcome’ .. won’t get it either way (objective or incidental).. will just perpetuate same song
***’The Global Observatory monitors worldwide impact of AI adoption across cultures, tracking both positive and negative outcomes such as over-reliance and loss of agency. It serves as a central data sharing resource enabling AI developers, researchers, industry stakeholders, and users to ensure AI technologies evolve in directions that benefit humanity. ‘.. kills any potential for the unconditional part of left to own devices ness.. oof
no legit shooting for moon ness
Program pillars
Apart from the program’s research activities centered at the MIT Media Lab, a series of events and communications of the program aim to foster broader discussion and awareness, and invite collaboration and innovation with multiple stakeholders:
- Annual symposium: This event brings together leading experts, researchers, and professionals to discuss the latest advancements and challenges in human-AI interaction. Attendees can expect to gain exposure to cutting-edge ideas, connect with thought leaders in the field, and learn best practices grounded in research results.
nothing to date legit diff/new/cutting-edge.. oi
- Focused workshops: These more frequent and more intimate, hands-on sessions will dive into specific topics of Human-AI interaction, providing participants with practical skills and deeper insights and offering opportunities for hands-on collaboration with experts, target user groups, and other stakeholders.
again.. kills any potential for the unconditional part of left to own devices ness..
- Social media accounts: Our regular updates using both shorter and longer formats will keep the community abreast of the latest news, research findings, and events related to human-AI interaction and its human consequences.
- Speaker series: This series will feature interviews and discussions with thought leaders and pioneers in the field providing access to thought-provoking content and the opportunity to engage with speakers.
might be keeping notes here: mit aha speakers on ai
Leadership (just leaving all their links – unhighlighted in turquoise – for now)
- Pattie Maes, co-lead of the AHA program, Germeshausen professor of Media Technology and director of the Fluid Interfaces group, has 30+ years of expertise in research at the intersection of AI and human-computer interaction. Maes’ group focuses on design of AI systems that augment decision making, learning, health and wellbeing.
- Pat Pataranutaporn, co-lead of the AHA program, MIT Media Lab postdoctoral researcher in the Fluid Interfaces group. Pat’s research lies at the intersection of AI and human-computer interaction, where he develops and studies AI systems that support human flourishing.
- Andrew Lippman, Senior research scientist, Director of the Viral Communications group & Associate Director of the MIT Media Lab
- Cynthia Breazeal, Professor at the MIT Media Lab, Director of the Personal Robots group, MIT Dean for Digital Learning
- Paul Liang, Assistant Professor at the MIT Media Lab and MIT EECS, Director of the Multisensory Intelligence research group
- Hiroshi Ishii, Professor and Associate Director at the MIT Media Lab, Director of the Tangible Media group
- Deb Roy, Professor at the MIT Media Lab, Director of MIT’s Center for Constructive Communication
- Tod Machover, Professor at the MIT Media Lab, Director of the Opera of the Future Group
- Mitchel Resnick, Professor at the MIT Media Lab, Director of the Lifelong Kindergarten group
- Joseph A. Paradiso, Professor at the MIT Media Lab, Director of the Responsive Environment group
- Rosalind Picard, Grover M. Hermann Professor in Health Sciences and Technology and director of the Affective Computing Group, has 30+ years of expertise in research related to AI, wearables, and human health and wellbeing. Picard’s group focuses on innovative solutions to help people who are not flourishing or at risk of not flourishing.
- Ramesh Raskar, Associate Professor of Media Arts and Sciences, Director of Camera Culture Group.
- Kent Larson, Professor of the Practice, Director of City Science Group
- Behnaz Farahi, Assistant Professor of Media Arts and Sciences, Director of Critical Matter group
- Zach Lieberman, Adjunct Associate Professor of Media Arts and Sciences, Director of Future Sketches group
- Dava Newman, Director of the MIT Media Lab & Apollo Program Professor of Astronautics at MIT
_____
_____
____
_______
- ai (up to date list on this page)
- advancing humans with ai (aha)
- ai as augmenting interconnectedness
- ai as nonjudgmental expo labeling
- alison on ai
- anne on ai
- ben on agi ness
- ben and joscha on conscious ai
- benjamin on ai
- cory on ai
- deb on ai
- ef on appropriate tech
- evgeny on ai
- evgeny on ai take 2
- george and michel ai convo
- george on ai
- george on metamodern ai shaman ness
- jaron on ai
- jon on ai
- kevin (o) on ai
- mit aha speakers on ai
- monika on ai
- nicolas on ai
- noam on ai
- paul on ai
- peter on ai
- sam on ai
- tristan and aza on ai
_______
____
____


