jaron on humanity
jaron lanier on humanity (and ai – jaron on ai) – Jaron Lanier: How humanity can defeat AI – The techno-philosopher on the power of faith – via this tweet [https://twitter.com/marigo/status/1655990062784315413?s=20] from a @Type217 quote tweet.. links to may 2023 article [https://unherd.com/2023/05/how-humanity-can-defeat-ai/] of an interview w jaron by Flo Read (UnHerd's producer and a presenter for UnHerd TV)
notes/quotes from article:
My difference with my colleagues is that I think the way we characterise the technology can have an influence on our options and abilities to handle it. And I think treating AI as this new, alien intelligence reduces our choices and has a way of paralysing us. *An alternate take is to see it as a new form of social collaboration, where it’s just made of us. It’s a giant mash-up of human expression.. it opens up channels for addressing issues and makes us more sane and makes us more competent. **So I make a pragmatic argument to not think of the new technologies as alien intelligences, but instead as human social collaborations..t
*huge to idiosyncratic jargon ness of augmenting interconnectedness
**need 1st/most: means to undo our hierarchical listening to self/others/nature so we can org around legit needs
imagine if we listened to the itch-in-8b-souls 1st thing everyday & used that data to connect us
tech as it could be
ai as augmenting interconnectedness
I believe the approach that I’ve proposed of not thinking of what we’re doing as alien intelligences but rather a social collaboration, is the way through that problem, because it’s a way of framing it that’s equally valid, but actionable. But within the tech world, giving up those childhood science-fiction fantasies that we grew up with is really hard for people..t
sinclair perpetuation law.. graeber rethink law.. et al
If you care about people at all, if you want people to survive, you have to place your faith in the sentience of them instead of in machines as a pragmatic matter, not as a matter of absolute truth.
FR: Is the only distinction between human and machine sentience, then, a faith in the power of the human soul versus the fact that that computer is just amalgamating information?..t
I do think that the continuation of us in this timeline, in this world, and this physicality, is something I’d like to commit to. I think we might be something special. And so in that way, I’d like to apply faith to us and give us a chance, and that does involve the demoting of computers..t But when we demote computers, we can use them better. Demoting AI allows us to not mystify, and that allows us paths to explaining it, to controlling it, to understanding it, to using it as a scientific exploration of what language is. There’s so many reasons to demote it that are practical, that the faith in it as a mystical being just actually seems kind of stupid and wasteful and pathetic to me.
nonjudgmental expo labeling et al.. me other tool
FR: But can we demote something that has potentially more power than us already? Most of us are already subordinated to computers in our everyday lives.
JL: People are capable of being self-destructive, idiotic, wasteful and ridiculous, with or without computers. However, we can do it a little more efficiently with computers because we can do anything a little more efficiently with computers. I’ve been very publicly concerned about the dehumanising elements of social media algorithms. The algorithms on social media have caused elevated outbreaks of things that always existed in humanity; there’s just a little more: vanity, paranoia, irritability. And that increment is enough to change politics, to change mental health, especially in impoverished circumstances around the world. It’s just made the world worse incrementally. The algorithms on social media are really dumbass-simple — there’s really not a lot there. And so I think your framing of it as more powerful than us is incorrect. I think it’s really just dumb stuff. It’s up to us to decide how it fits into human society.
mufleh humanity law: we have seen advances in every aspect of our lives except our humanity– Luma Mufleh
The capacity for human stupidity is great, and, as I keep on saying, it’s only a matter of faith whether we call it human stupidity or machine intelligence. They’re indistinguishable logically. So I think the threat is real. I’m not anti-doomist. I just ask us to consider: what is the way of thinking that improves our abilities and improves our thinking — that gives us more options, gives us more clarity? And it involves demoting the computer.
ai as augmenting interconnectedness
FR: You didn’t sign the open letter demanding a hiatus in accelerating AI development, which was signed by Elon Musk and Sam Altman. Was that not appealing to you as an idea?
JL: My reason for not signing it is that it fundamentally still mystified the technology. It took the position that it’s a new alien entity. When you think it’s an alien entity, there’s no way to know how to help..t If you have an alien entity, what regulation is good? How do you define harm? As much as the GPT programmes impress people, they don’t actually represent ideas. We don’t know how to define these things, all we can do is mash things up so that they conform with classifiers.
FR: So they can’t do the philosophical work of thinking?
JL: They could mash-up philosophers in ways that might be interesting. If you say, “Write an essay as if Descartes and Derrida collaborated”, something might come out that’s provocative or interesting. But there’s no actual representation inside there. And getting provocative or interesting mash-ups is useful, but you can’t set policy by it because there’s not actually any meaning. There’s no fundamental representation inside these things and we just have to accept that as reality. We don’t know what meaning is and we can’t represent meaning.
again.. need: nonjudgmental expo labeling
FR: Your argument relies on the idea that if we define this technology differently, then we will have more power over it, or at least we’ll have more understanding of it. Are we not just self-comforting here with a rhetoric about it being a human technology rather than something we can’t control?
JL: I don’t think that’s the case. It’s proposing a more concrete and clarified path of action. It’s very demanding of people and it’s not comforting at all. It demands that everybody involved on a technical or regulatory level do much more than they have. I suspect many people would prefer the mystical version because it actually lets them off the hook. The mystical version just lets you sit there and apprehend, and express awe at our own inventions. What I’m talking about demands action. It’s not comforting and it shouldn’t be.
It’s the surrounding stuff that created the disaster, not the core capability, which probably has been useful in general. In the same way, with this large-model AI, it’s not the thing itself, it’s the surrounding material that determines whether it’s malignant or not.. The malignancy is in the surrounding material, not in the core technology, and that’s extremely important to understand..t
huge to thinking all data to date non legit.. like from whales in sea world
I do worry about it (ai getting into wrong hands), and the antidote to it is universal clarity, context, transparency, which can only come about by revealing people, since revealing ideas is impossible because we don’t know what an idea is.
imagine if we ness
JL: Our principal encounter with algorithms so far has been in the construction of the feeds we receive in our apps. It’s in whether we get credit or not and other things like that — whether we get admitted to university or not, or whether we’re sent to prison, depending on what country we’re talking about. Algorithms have transformed us. I would hope that the criticisms of them that I and many others — *Tristan Harris, Shoshana Zuboff — have put forward have illuminated and clarified the issues with algorithms in the previous generation. But what could happen with the new AI is a worse version of all of that. Given how bad that was, I don’t think the doomerists are entirely wrong. **I think we could confuse ourselves into extinction with our own code. But, once again, in order for us to achieve that level of stupidity, we have to believe overly in the intelligence of the software, and I think we have a choice.
*tristan and aza on ai.. tristan harris.. shoshana zuboff.. zuboff unprecedented law.. age of surveillance capitalism..
**already (to me have always been) on that path.. via whalespeak perpetuating whales..
there’s a nother way
graeber make it diff law et al
*But the thing doing the damage is hiding ourselves, not the algorithm itself, which is actually just a simple dumb thing. I think there are a lot of good things about an algorithmic mash-up culture in the future. Every new instance of automation, instead of putting people out of work, could be thought of as the **platform for a new creative community.
*wilde not-us law et al
**everyday
imagine if we just focused on listening to the itch-in-8b-souls.. first thing.. everyday.. and used that data to augment our interconnectedness.. we might just get to a more antifragile, healthy, thriving world.. the ecosystem we keep longing for..
what the world needs most is the energy of 8b alive people
What really screwed over musicians was not the core capability, but this idea that you build a business model on demoting the musician, demoting the person, and instead elevating the hub or the platform. And so we can’t afford to keep on doing that. I think that is the road that leads to our potential extinction through insanity.
FR: It sounds like the answer to a lot of these problems comes down to human greed?
JL: I think humans are definitely responsible. Greed is one aspect of it, but it’s not all of it. I don’t necessarily understand all human failings within myself or anybody else, but I do feel we can articulate ways to approach this that are more practical, more actionable and more hopeful. That has to be our first duty. I think this question of diagnosing each other and saying, “This person has womb envy”, or whatever, has some utility, but not a lot, and can inspire reactions that aren’t helpful. So I don’t want to emphasise that too much. I want to emphasise an approach, which we can call “data dignity”, and which opens options for us and makes things clearer.
or just legit data.. ie: self-talk as data.. as global detox/re\set
FR: What is the best case scenario if we follow that route?
JL: What I like about the new algorithms is that they help us collaborate better. You could have a new and more flexible kind of a computer, where you can ask it to change the way you present things to match your mood or your cognition under your own control, so that you’re less subservient to the computer. But another thing you can do is you can say, “I have written one essay, my friend’s written another essay, they’re sort of different. Can you mash them up 12 different ways so we can read the mash-ups?” And this is not based on ideas, it’s based on the dumb math of combining words as they appeared, in order, in context. But you might be able to learn new options for consilience between different points of view that way, which could be extraordinary. Many people have been looking at the humanistic AI world, the human-centred AI world, and asking, “Could we actually use this to help us understand potential for cooperation and policy that we might not see?”
huge.. and important.. but i think your ie’s are still missing it.. ‘potential for coop and policy’ as red flags
need nonjudgmental expo labeling
FR: So, oddly, it might break us out of our tribes and offer some human connection?
JL: It’s like if a therapist says, “*Try using different words and see if that might change how you think about something.” It’s not directly addressing our thoughts, but on the surface level it actually can help us. But it’s ultimately up to us, and there’s no guarantee it’ll help, but I believe it will in many cases. **It can help us improve our research, it can help us improve a lot of our practices, and, as long as we acknowledge the people whose ideas are being mashed up by the programmes, it can help us even broaden participation in the economy, instead of throwing people out of work as so often foretold. I think we can use this stuff to our advantage, and it’s worth it to try. If we try to use it well, the most awful ideas about it turning into the Matrix or Terminator become ***vanishingly unlikely, as long as we treat it as a human project instead of an alien intelligence.
*or no words.. ie: rumi words law.. lanier beyond words law.. et al.. idiosyncratic jargon ness of self-talk as data/detox
**red flags.. dang.. any form of m\a\p
hari rat park law.. need a way out of sea world.. not a more humane way in it
***gershenfeld something else law et al
________