model is the message

jul 2022 article in noema mag by benjamin bratton and blaise agüera y arcas

“The Model is the Message” co-written with @blaiseaguera is on recent controversies in machine sentience, synthetic language and the philo-technical problems of these at scale. LaMDA is just the beginning of what will be a strange journey. 

https://t.co/flyvIRbaBp

Original Tweet: https://twitter.com/bratton/status/1546913763286077445

notes/quotes from article:

There are myriad issues of concern with regard to the real-world socio-technical dynamic of synthetic language. Some are well-defined and require immediate response. Others are long-term or hypothetical but worth considering in order to map the present moment beyond itself. Some, however, don’t fit neatly into existing categories yet pose serious challenges to both the philosophy of AI and the viable administration of cognitive infrastructures. Laying the groundwork for addressing such problems lies within our horizon of collective responsibility; we should do so while they are still early enough in their emergence that a wide range of outcomes remains possible..t Such problems deserving careful consideration include the seven outlined below.

for (blank)’s sake

Imagine that there is not simply one big AI in the cloud but billions of little AIs in chips spread throughout the city and the world — separate, heterogenous, but still capable of collective or federated learning. They are more like an ecology than a Skynet. What happens when the number of AI-powered things that speak human-based language outnumbers actual humans? What if that ratio is not just twice as many embedded machines communicating in human language as humans, but 10:1? 100:1? 100,000:1? We call this the Machine Majority Language Problem.

Nested within this is the Ouroboros Language Problem. What happens when language models are so pervasive that subsequent models are trained on language data that was largely produced by other models’ previous outputs?..t The snake eats its own tail, and a self-collapsing feedback effect ensues.

already have that.. ie: language as control/enclosure et al..
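a toy sketch of that ouroboros feedback (my own illustration, not from the article, and names like `spreads` are mine): each generation “trains” on the previous generation’s outputs, and because a model favors its most probable outputs, the tails of the distribution get dropped each round — the fitted spread collapses:

```python
import random
import statistics

# toy sketch of the ouroboros language problem (illustrative only):
# each "model" here is just a gaussian fit to the previous model's
# outputs. models favor high-probability outputs, so each generation
# keeps only the middle 90% of its samples before refitting -- and the
# fitted spread shrinks generation after generation.
random.seed(0)
mu, sigma = 0.0, 1.0           # generation 0: "human" language data
spreads = [sigma]

for generation in range(10):
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    kept = samples[50:950]     # drop the 5% tails: rare phrasings vanish
    mu, sigma = statistics.mean(kept), statistics.stdev(kept)
    spreads.append(sigma)

print(f"spread: gen 0 = {spreads[0]:.2f}, gen 10 = {spreads[-1]:.2f}")
# the spread shrinks steadily: the snake eating its own tail
```

real language models are vastly more complex, but this truncate-and-refit loop captures the gist of why training on model-generated data narrows the distribution over time.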

Will it remain possible to cleanly differentiate synthetic from human-generated media at all..t, given their likely hybridity in the future?

yeah.. we have not yet had that human generated ness at all.. already synthetic/cancerous whalespeak

wilde not-us law: most people are other people. their thoughts are other people’s opinions. their lives a mimicry. their passions a quote.  – Oscar Wilde

For large models, however, all the messiness of language is included. Critics who rightly point to the narrow sourcing of data (scraping Wikipedia, Reddit, etc.) are quite correct to say that this is nowhere close to the real spectrum of language and that such methods inevitably lead to a parochialization of culture. We call this the Availability Bias Problem, and it is of primary concern for any worthwhile development of synthetic language.

oi.. what real spectrum of language?

need: means/tech/ai to undo our hierarchical listening

Finally, the energy and carbon footprint of training the largest models is significant, though some widely publicized estimates dramatically overstate this case. As with any major technology, it is important to quantify and track the carbon and pollution costs of AI: the Carbon Appetite Problem. As of today, these costs remain dwarfed by the costs of video meme sharing, let alone the profligate computation underlying cryptocurrencies based on proof of work. Still, making AI computation both time and energy efficient is arguably the most active area of computing hardware and compiler innovation today..t

save tons time/energy/footprint.. if let go of any form of m\a\p.. and rather.. use tech/ai for augmenting interconnectedness ie: tech as it could be

Further, most of the energetic costs of computing, whether classical or neural, involve moving data around. As neural computing becomes more efficient, it will be able to move closer to the data, which will in turn sharply reduce the need to move data, creating a compounding energy benefit.

huge..

but rather.. closer to legit data.. ie: self-talk as data

mufleh humanity law: we have seen advances in every aspect of our lives except our humanity – Luma Mufleh

Strongly committed as we are to thinking at planetary scale, we hold that modeling human language and transposing it into a general technological utility has deep intrinsic value — scientific, philosophical, existential — and compared with other projects, the associated costs are a bargain at the price.

need to focus on idiosyncratic jargon and an io dance

ie:

imagine if we just focused on listening to the itch-in-8b-souls.. first thing.. everyday.. and used that data to augment our interconnectedness.. we might just get to a more antifragile, healthy, thriving world.. the ecosystem we keep longing for..

what the world needs most is the energy of 8b alive people

In “Golem XIV,” among Stanislaw Lem’s most philosophically rich works of fiction, he presents an AI that refuses to work on military applications and other self-destructive measures, and instead is interested in the wonder and nature of the world. As planetary-scale computation and artificial intelligence are today often used for trivial, stupid and destructive things, such a shift would be welcome and necessary. For one, it is not clear what these technologies even really are, let alone what they may be for. Such confusion invites misuse, as do economic systems that incentivize stupefaction.

biggest need: means to undo our hierarchical listening

its (ai’s) ultimate form and value may still be largely undiscovered..t

because we can’t hear deep enough.. to hear/see what we legit need.. so that’s what we need it for first..

again.. as means to undo our hierarchical listening

One clear and present danger, both for AI and the philosophy of AI, is to reify the present, defend positions accordingly, and thus construct a trap — what we call premature ontologization — to conclude that the initial, present or most apparent use of a technology represents its ultimate horizon of purposes and effects.

we’ve been doing that since beginning of time.. w all techs.. same song.. we need to try something legit diff..

*Reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of a groundhog-day rehashing of debates about whether machines have souls or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead **construct more nuanced vocabularies of analysis, critique, and speculation based on the weirdness right in front of us.

*ie: idiosyncratic jargon et al

**w idio jargon et al.. via imagine if we ness (meaning.. used tech/ai to listen deeper).. things like analysis, critique, speculation.. would be/become irrelevant s

__________
