peter (e) on ai
peter eckersley on ai wish
via @AIObjectives mar 2023 tweet [https://twitter.com/AIObjectives/status/1632542227158138882?s=20]:
A Privacy Hero’s Final Wish: An Institute to Redirect #AI’s Future https://wired.com/story/peter-eckersley-ai-objectives-institute/ via @wired Thank you, @a_greenberg, for capturing the legacy of Peter Eckersley and our mission at AOI.
A Privacy Hero’s Final Wish: An Institute to Redirect AI’s Future
Peter Eckersley did groundbreaking work to encrypt the web. After his sudden death, a new organization he founded is carrying out his vision to steer artificial intelligence toward “human flourishing.”
About a week before the privacy and technology luminary Peter Eckersley unexpectedly died last September, he reached out to artificial intelligence entrepreneur Deger Turan. Eckersley wanted to persuade Turan to be the president of Eckersley’s brainchild, a new institute that aimed to do nothing less ambitious than course-correct AI’s evolution to safeguard the future of humanity.
@degerturann: working on scalable cooperation, AI, and human flourishing – @AIObjectives – ai.objectives.institute
Turan had never actually had the chance to tell Eckersley he accepted his request. But as soon as he learned of Eckersley’s death, he knew that the role at AOI was not only the most important work he could be doing but also a way to help establish a central pillar of his friend’s legacy. “So I said yes,” Turan says. “Let’s do this—not a little further, but all the way.”
The memorial event also served as a kind of soft launch for AOI, the organization that will now carry on Eckersley’s work after his death. Eckersley envisioned the institute as an incubator and applied laboratory that would work with major AI labs to take on the problem Eckersley had come to believe was, perhaps, even more important than the privacy and cybersecurity work to which he’d devoted decades of his career: redirecting the future of artificial intelligence away from the forces causing suffering in the world, toward what he described as “human flourishing.”..t
huge
mufleh humanity law: we have seen advances in every aspect of our lives except our humanity – Luma Mufleh
Gallagher, now AOI’s executive director, emphasizes that Eckersley’s vision for the institute wasn’t that of a doomsaying Cassandra, but of a shepherd that could guide AI toward his idealistic dreams for the future. “He was never thinking about how to prevent a dystopia. His eternally optimistic way of thinking was, ‘How do we make the utopia?’” she says. “What can we do to build a better world, and how can artificial intelligence work toward human flourishing?”..t
need 1st/most: means to undo our hierarchical listening to self/others/nature as global detox/re\set.. so we can org around legit needs
imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness as nonjudgmental expo labeling)
To that end, AOI is already working on a handful of example projects to push AI onto that path, now with the help of nine core contributors and a handful of grants, including $485,000 from the Survival and Flourishing Fund. *The pilot project that’s furthest along, called Talk to the City, is designed to use a ChatGPT-like interface to survey millions of people in a city to both understand their needs..t and to advocate for them in discussions with policymakers, journalists, and other citizens. Turan describes the experiment as a tool for collective organizing and for governments, enabling a form of democracy more nuanced than simple elections or referendums. He says interested beta testers for the project include everyone from the organizers of Burning Man’s Black Rock City to staffers at the United Nations.
*imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness as nonjudgmental expo labeling)
Another prototype, called Mindful Mirror, will serve as a kind of personal interactive journal, a chatbot that converses with its user to help them process the events of their daily life. A third, called Lucid Lens, will function as a browser plugin that highlights content it detects as being designed to cause outrage or “dopamine loops” that manipulate users in ways they’d rather be aware of or avoid.
sounds like hosting life bits.. self-talk as data.. et al
“His genius was finding the tiny little hack that would open up a big story,” EFF executive director Cindy Cohn said in her speech at Eckersley’s memorial service. “The demonstration that would make manifest what people needed to see about how technology was working.”..t
infinitesimal structures approaching the limit of structureless\ness and/or vice versa .. aka: ginorm/small ness
Even as Eckersley was in the midst of launching those influential projects, he was already thinking about an entirely different area of technology where he felt he might make an even bigger long-term impact. By 2013, Eckersley was talking to computer scientists like Anders Sandberg, Stuart Russell, and Nick Bostrom, who were focused on “civilizational risk” from AI, says Brian Christian, a scientist and author of the AI-focused books The Most Human Human and Algorithms to Live By, and who served as the master of ceremonies for Eckersley’s memorial event.
By the time he left the EFF in 2018, Christian says, Eckersley had decided it was time to refocus his efforts on shaping AI’s future. “He saw it as higher stakes, in a way,” says Christian. “He ultimately ended up concluding that the gravity of AI, even in its more hypothetical harms, was so great that it felt urgent, that it was the most important thing to him.” Christian says Eckersley was so persuasive about the magnitude of the problem that he transformed Christian’s thinking about AI’s future, too. He dedicated his 2020 book on the topic, The Alignment Problem, “to Peter, who convinced me.”
swartz most important law et al.. swartz no going back law et al..
Eckersley’s sister Nicole, who gave the final speech of the evening, said that she had begun to hear from her brother about his vision for something like AOI in 2020. And even when his health suddenly took a precipitous decline in the late summer of 2022, the institute remained his focus. “Even from his hospital bed, he was charging full speed ahead on AOI. All of his last wishes and instructions were about the survival of this incredibly important project,” she said. “We want to see Peter’s plans come to fruition. We want to keep engaged with this incredible community. We want to stop the robots from eating us and crapping out money.”
“So I hope you’ll all see this not just as a memorial,” she concluded, “but as the start of an incredible living legacy.”
_________
intro’d via george pór jul 2023 tweet (and later adding george on ai) [https://twitter.com/technoshaman/status/1679072999528382466?s=20]:
Understanding How AI Will Impact Society Long-Term, by @AIObjectives
https://open.substack.com/pub/aoiarticles/p/understanding-how-ai-will-impact…
visionary content that is avoiding both techno-doomerism and techno-utopianism
links to mar 2023 article:
An overview of AOI’s approach to alignment research
The rapid emergence of advanced AI offers us an unprecedented opportunity to reach widespread human flourishing, but our current systems do not put us on that path..t As AI tools and companies start moving towards artificial general intelligence, it’s becoming more urgent for us to understand and improve the sociotechnical systems that ground and shape AI. Questions of AI governance, ethics, and law come to the forefront. It’s also imperative that we examine our own humanity: if we want AI to reflect and operate with sound human objectives and enable human flourishing, we need to understand what’s been holding us back, with or without AI.
it’s about ai as augmenting interconnectedness as nonjudgmental expo labeling.. and we’re missing it
At the AI Objectives Institute, we’re exploring what institutions and coordination strategies we can and should adopt in light of ongoing AI progress. We’re asking: what social technologies are necessary to align AI progress with human gain? What new social technologies does AI progress make possible, and how can we connect the two? For context, here are some additional areas we’re exploring:
- What are the misalignments that already permeate society and cause our collective intelligence to do harm?
- What are the coordination failures that exist in our society today, and how can we scale cooperation to empower each and every individual’s pursuits?
- What current failure points in our institutions and economy are most likely to be exacerbated by technical progress?
- How are the biases and prejudices reinforced by our current social institutions going to change as AI develops, and how can AI help us overcome our individual and collective blindspots?
- How can we use AI towards collective intent alignment, helping enhance public deliberations by identifying agreements and common grounds, disagreements and cruxes?
Avoiding bad outcomes from advanced AI and aiding human flourishing require more than just technical methods for aligning an AI to human values: we must also learn to align the sociotechnical structures that build and use AI. The institutional background and information landscape within which AI systems develop will critically determine AI’s future. Our hope at AI Objectives Institute is that by leveraging current AI to make our sociotechnical world better, we can reach positive feedback loops between AI alignment and collective agency.
Where We’ve Been… Where We’re Headed
In the early 2000s, we were ready for the miracles of real-time connectivity to bring new joys into our lives and new forms of democracy to our societies. We didn’t expect the freefall into fake news, echo chambers, online bullying, mental health feedback loops, and a plethora of privacy threats and scandals. From our news consumption to our wellbeing, our financial status, and our national security, social media has infiltrated our lives at every intersection.
The impact of self-improving AI systems in the coming years will be much more drastic than what we’ve experienced with social media. Our capacity for social agency will be fundamentally transformed, with impacts on individual wellbeing, our collective organization capabilities, and the reliability of our institutions. The AI Objectives Institute (AOI) was founded to help steer this transformation and its many feedback loops towards good ends..t
only if we let go of any form of m\a\p and use ai as nonjudgmental expo labeling
Our late founder, Peter Eckersley, started AOI to build a community around a series of projects with a common theme: defining goals that contribute to human flourishing, and translating them into patterns that can guide AI systems..t We’re moving forward to honor Peter’s legacy and vision and to explore how society will be affected by AI developments.
legit human freedom (legit flourish) only if org around legit needs
At AOI, we believe that the ways in which human systems will fail at managing advanced AI will not be wholly unexpected: they will take the form of familiar institutional, financial and environmental failures, which we have experienced over the last decade at unprecedented rates. The core of every existential risk is the risk that we fail to collaborate effectively, even when each of us has everything to lose. Let’s learn to coordinate in service of a future that will be better for us all..t
has to be all of us .. or the dance won’t dance
humanity needs a leap.. to get back/to simultaneous spontaneity .. simultaneous fittingness.. everyone in sync..
we need a problem deep enough to resonate w/8bn today.. a mechanism simple enough to be accessible/usable to 8bn today.. and an ecosystem open enough to set/keep 8bn legit free
ie: org around a problem deep enough (aka: org around legit needs) to resonate w/8bn today.. via a mechanism simple enough (aka: tech as it could be) to be accessible/usable to 8bn today.. and an ecosystem open enough (aka: sans any form of m\a\p) to set/keep 8bn legit free
1\ undisturbed ecosystem (common\ing) can happen
2\ if we create a way to ground the chaos of 8b legit free people
________
________
_________
- ai (up to date list on this page)
_________
________
_______


