tristan and aza on ai

The A.I. Dilemma – March 9, 2023 – [https://www.youtube.com/watch?v=xoVJKj8lcNQ]:

Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.

tristan harris and aza raskin – cofounders of center for humane tech – and behind the doc – the social dilemma

via this tristan tweet [https://twitter.com/heif/status/1648096090049196033?s=20]:

“loneliness becomes largest national security threat” @aza

our obsession with ‘knowing ness’ over ‘connected ness’ is a cancerous distraction..

need ai as augmenting interconnectedness

notes/quotes from 70 min video:

2 min – t: ai is so abstract and affects so many things w/o grounding metaphors.. hard to wrap head around how transformational it is.. want to arm you with a more visceral way of seeing the expo curves we’re heading into

4 min – a: not saying there aren’t incredible positives

t: what we are saying.. ‘the ways we’re releasing new language models into public.. are we doing that responsibly..’ and we’re hearing that we’re not – we’re here to figure out what responsibility looks like

5 min – t: 50 % of ai researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control ai

t: we’re rapidly boarding people on this dangerous plane because of some of the dynamics we’re going to talk about.. structure of the problem .. 3 rules of tech:

a: 1\ when you invent a new tech, you uncover a new class of responsibilities .. not always obvious what those are.. 2 ie’s: didn’t need right to be forgotten written into law until computers could store/remember us forever.. not obvious that cheap storage would mean need to invent new law.. and didn’t need right to privacy written into law until mass produced cameras came onto market.. and to fast forward.. we are still in process of figuring out how to write into law that which the attention econ takes from us

6 min – a: 2\ if tech confers power, it starts a race.. and 3\ if you do not coordinate, the race ends in tragedy.. there’s no one single player that can stop the race that ends in tragedy.. that’s really what the social dilemma was about

the social dilemma

7 min – t: 1st contact w ai – we already have an ai to keep you scrolling.. and that was enough to *break humanity.. (info overload, addiction, doomscrolling, influence culture, sexualization of kids, qanon, shortened attention spans, polarization, bots, deepfakes, cult factories, fake news, breakdown of democracy).. and no one intended those things to happen.. so what happens in this 2nd contact w ai.. some benefits.. a race for something ..

*humanity already broken from forever ago..

hari rat park law et al

8 min – a: 1st contact – curation ai.. 2nd contact – creation ai.. generative models and all that

t: now in this first contact.. humanity lost.. we said.. how?.. said sm was going to give everyone a voice.. connect w friends.. join like minded communities.. enable reaching customers.. and these were all true.. but behind this friendly face.. there were other problems ie: addiction, disinfo, mental health, polarization, censorship v speech

9 min – t: but in our work.. we said.. even behind that there is actually this even deeper thing.. this arms race.. race for attention.. race to bottom of brain stem that created this engagement monster which is ai.. that was just trying to max engage.. we missed the deeper paradigm

10 min – t: so we think if want to predict this other ai.. have to understand what’s actually behind the way the narrative reads

a: if try to solve these problems (addiction, disinfo, mental health, polarization, censorship v speech) on their own .. you’re going to be playing wack a mole and not get to the generator functions.. not actually going to solve the problem..t

even deeper.. if don’t org around legit needs first.. in a way that is simple/open enough for everyone today.. going to be playing wack a mole.. not solve deep enough problem

humanity needs a leap.. to get back/to simultaneous spontaneity ..  simultaneous fittingness.. everyone in sync..

10 min – t: max engagement rewrote the rules of every aspect of our society.. it took these other core aspects of our society *(gdp, elections, values, national security, reaching customers, media and journalism, politics, children’s id) into its tentacles and took them hostage.. ie: children held hostage if don’t have an id online.. all run thru engagement econ

oi.. core to sea world.. cancerous distractions to an undisturbed ecosystem.. to legit free people

findings:

1\ undisturbed ecosystem (common\ing) can happen

2\ if we create a way to ground the chaos of 8b legit free people

11 min – t: reason we’re here.. we believe major step functions in ai are coming and we want to get to it before it becomes entangled..

oi.. setting yourself up for ongoing wack a mole.. you’re not seeing the essence of us.. you’re seeing.. and trying to org around what whales are like in sea world..

t: so in 2nd contact moment w gpt3.. notice.. have we fixed misalignment w sm?.. no.. if talk about 2nd contact.. new large language models.. narrative now.. ai will make us: more efficient, write/code faster, solve impossible sci challenges, solve climate change, make a lot of money.. and these things are all true.. these are real benefits..

oh my guys.. oh my

12 min – t: and also behind that ai: bias, taking our jobs, need transparency, acting creepy.. and behind all that.. this other monster.. who is increasing capabilities and entangling itself w society.. so our purpose is to try to get ahead of that because in *2nd ai.. going to see: reality collapse, false everything, trust collapse, automated loopholes in law..

oh my.. don’t you think we’re already (always have been) there? oi oi oi

maybe i’ve heard enough.. we’ll see.. enough for now anyway

14 min – a: not here to talk doomsday.. there are concerns about ai.. but at same time.. i think all our experience of using ai in past.. ie: siri.. then gives high level overview of ai.. the trend lines

15 min – a: all fields from before turned into one – language – in 2017..

t: when all synthesized into language models.. everyone contributing to one curve

16 min – a: can start to treat everything as language.. so any advance in one part becomes an advance in every part.. so advances immediately multiplicative.. and can translate between diff modalities

language as control/enclosure

17 min – a: generative, gain own capacities.. emergent capabilities.. calling them gollem

18 min – a&t: on ie’s

which to me is irrelevant.. because none of it addresses org ing around legit needs

need 1st/most: means to undo our hierarchical listening to self/others/nature so we can org around legit needs

29 min – a: loneliness becomes national security threat.. et al.. all of that is what we mean when we say 2024 will be last human election..

already is.. missing pieces et al

30 min – a: gollem ai have emergent capabilities their programmers didn’t program

32 min – a: ie: ask ai to do arithmetic/language.. et al.. but keep increasing and boom.. huge jump in capabilities and no one knows why.. ie: ai develops theory of mind.. strategy levels

oi

39 min – a&t: more ie’s on expo curves..

42 min – t: we’re not talking about bias, loss of jobs, et al (all things listed at 7 min).. talking about the fast push.. so .. 2nd contact.. enables these exponentially..

45 min – a: becomes better than any human at persuasion.. this is terrifying stuff

matters little if still in sea world

46 min – t: on co’s competing to have intimate spot in your life.. none of these things illegal..

58 min – t: on no one being able to stop on their own.. has to be all of us

not just all of us.. but all of us out of sea world

59 min – t: from talking to experts.. what we hear most is we need to selectively slow down the public deployment of gollem ais (large language model ais)..t

not deepest problem.. another ie of wack a mole.. speed makes no diff if don’t get out of sea world first.. hari rat park law et al

then they talk about how-tos and how we need to notice the harm.. because if you mention it.. it will be like everyone is gaslighting you.. telling you you’re crazy.. saying ai is good.. it is doing good things.. problem is.. the dangers undermine all the other benefits..

1:07 – t: we want to find a solution that is negotiated among the players.. t

need 1st/most: means to undo our hierarchical listening to self/others/nature so we can org around legit needs

imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us (tech as it could be.. ai as augmenting interconnectedness)
