jaron lanier

jaron lanier

above – from Jaron on Charlie Rose – may 2014

intro’d to Jaron here:

jaron post

His two books, “You Are Not a Gadget” and “Who Owns the Future?,” have galvanized opposition to the Web economy, in part because Lanier comes from the world he warns us about. An important pioneer of virtual reality who currently does research for Microsoft, he’s become a media sensation in spite of his understated, almost shy manner.

I view them as not only desirable but necessary for any kind of stable, sustainable economy or for viable democracy. And if we get that far then we should recognize that creating a strong middle class is a necessary moral, ethical and practical, pragmatic project that fires on all cylinders. That whatever your belief system — whether you like markets better than government, or government better than markets, or some sort of societal vision better than either of those. Whatever the hell you believe in, whatever you like, middle classes are your friend and I believe that we can get to that point and become interested in how to promote them even if we might not agree entirely on every step to get there. I think all the roads converge at that point.

equity ness..

The book received mostly positive reviews, though some objected to his proposed solution – that citizens be reimbursed with micro-payments whenever their personal information led to the generation of revenue. Since people’s Facebook preferences help companies sell, for instance, and the work of human translators provides the basis for online translation programs, these people should be compensated: “A new kind of middle class, and a more genuine, growing information economy, could come about if we could break out of the ‘free information’ idea and into a universal micro-payment system.”

[I] also get an enormous number of inquiries from all over the world — every major government and every major educational institution and every think tank interested in this idea of universal micro-payments and whether it might be a path forward. I might almost say that early on there was some criticism but at this point there’s more just curiosity and it’s been very favorable and friendly so far as I can tell, extraordinarily so. There will be a lot more news about that in the next few weeks.
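
a tiny sketch (mine, not a spec from jaron or ted nelson) of the pro-rata arithmetic behind universal micro-payments: when something built from people’s data earns revenue, each contributor gets a share weighted by how much their data fed the result. the names and numbers below are made up for illustration.

```python
# illustrative only: pro-rata micro-payment split for one revenue event
# whose output depended on data contributed by identifiable people.

def split_micropayments(revenue, provenance):
    """provenance maps contributor -> weight of their data in the derived product."""
    total = sum(provenance.values())
    if total == 0:
        return {}
    # round to cents purely so the example output reads cleanly
    return {person: round(revenue * weight / total, 2) for person, weight in provenance.items()}

# example: a translation query earns $0.40 and leaned on example sentences
# originally written by three human translators (hypothetical weights).
provenance = {"translator_a": 5, "translator_b": 3, "translator_c": 2}
print(split_micropayments(0.40, provenance))
# {'translator_a': 0.2, 'translator_b': 0.12, 'translator_c': 0.08}
```

the arithmetic is the easy part; the hard part is keeping provenance attached to the data at all, which is roughly what nelson’s original design was after.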

________

most resonating (actual/origin .. in links/notes below): lanier beyond words law

jaron on words

________

jaron on ai

jaron on humanity

________

find/follow Jaron:

wikipedia small

Jaron Lanier (/ˈɛərɨn lɨˈnɪər/, born May 3, 1960) is an American writer, computer scientist, and composer of classical music. A pioneer in the field of virtual reality (a term he is credited with popularizing), Lanier left Atari in 1985 with Thomas G. Zimmerman to found VPL Research, Inc., the first company to sell VR goggles and gloves. In the late 1990s, Lanier worked on applications for Internet2, and in the 2000s, he was a visiting scholar at Silicon Graphics and various universities. Lanier has composed classical music and is a collector of rare instruments; his acoustic album, Instruments of Change (1994), features Asian wind and string instruments such as the khene mouth organ, the suling flute, and the sitar-like esraj. Lanier was the director of an experimental short film, and teamed with Mario Grigorov to compose the soundtrack to the documentary film, The Third Wave (2007). In 2010, Lanier was named in the TIME 100 list of most influential people.

holy crap:

Born Jaron Zepel Lanier in New York City, Lanier was raised in Mesilla, New Mexico. Lanier’s mother and father were Jewish immigrants from Europe; his mother was a survivor of a Vienna concentration camp and his father’s family had emigrated from Ukraine to escape the pogroms. When he was nine years old, his mother was killed in a car accident. He lived in tents for an extended period with his father before embarking on a seven-year project to build a geodesic dome home that he helped design. At the age of 13, Lanier convinced New Mexico State University to let him enroll. At NMSU, Lanier met Marvin Minsky and Clyde Tombaugh, and took graduate-level courses; he received a grant from the National Science Foundation to study mathematical notation, which led him to learn computer programming. From 1979 to 1980, the NSF-funded project at NMSU focused on “digital graphical simulations for learning”. Lanier also attended art school in Manhattan during this time, but returned to New Mexico and worked as a midwife. The father of a baby he helped deliver gave him a car as a gift; Lanier drove the car to Los Angeles to visit a girl whose father happened to work in the physics department at the California Institute of Technology, where Lanier met and conversed with Richard Feynman and Murray Gell-Mann.

In “One-Half a Manifesto”, Lanier criticizes the claims made by writers such as Ray Kurzweil, and opposes the prospect of so-called “cybernetic totalism”, which is “a cataclysm brought on when computers become ultra-intelligent masters of matter and life.” Lanier’s position is that humans may not be considered to be biological computers, i.e., they may not be compared to digital computers in any proper sense, and it is very unlikely that humans could be generally replaced by computers easily in a few decades, even economically. While transistor count increases according to Moore’s law, overall performance rises only very slowly. According to Lanier, this is because human productivity in developing software increases only slightly, and software becomes more bloated and remains as error-prone as it ever was. “Simply put, software just won’t allow it. Code can’t keep up with processing power now, and it never will.” At the end he warns that the biggest problem of any theory (esp. ideology) is not that it is false, “but when it claims to be the sole and utterly complete path to understanding life and reality.” The impression of objective necessity paralyzes people’s ability to step outside of or fight the paradigm, and causes a self-fulfilling destiny that spoils them.

…he is critical of the denatured effect which “removes the scent of people”.

Lanier argues that the search for deeper information in any area sooner or later requires that you find information that has been produced by a single person, or a few devoted individuals: “You have to have a chance to sense personality in order for language to have its full meaning.” That is, he sees limitations in the utility of an encyclopedia produced by only partially interested third parties as a form of communication.

____________

his site:

jaron lanier site

______________

Jaron on colbert:

jaron lanier on colbert

________

Jaron on Charlie Rose: http://www.charlierose.com/watch/60362672

3 major data problems: 2008, nsa, healthcare

problem is statistics work for so long.. but then reality doesn’t fit 

[he gave examples of – knowing what to buy, and how to vote – what if both of those are not natural for us.. what if both of those aren’t something we’d be doing if we had the choice]

ted nelson – 1960 – behind the automation we still need people – so if we could just pay them with dignity

the only way your wealth can mean anything is if you’re part of a functioning society

what i’m interested in is find moments we can experiment with alternate models..

perhaps a people experiment… ?

create a middle class that will persist no matter how good the robots get…

_____________

2011:

http://www.newyorker.com/magazine/2011/07/11/the-visionary

“If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?”

[..]

“I’m trying to stay focussed on the long game, not the item of the week. Because the issues I’m talking about will take a long time to address.”

[..]

He played a dark, plinky composition in what sounded like a minor key. “It’s not really minor,” Lanier said when I inquired. He played another dissonant progression. “It’s not that simple.” He gazed upward and added, “I’m really interested in scales that are harder to resolve.”

[..]

Lanier has continued to argue that the purpose of digital technology should be to enrich human interaction.

[..]

Unlike more Luddite critics, Lanier complains not that technology has taken over our lives but that it has not given us enough back in return. In place of a banquet, we’ve been given a vending machine.

[..]

“The thing about technology is that it’s made the world of information ever more dominant,” Lanier told me. “And there’s so much loss in that. It really does feel as if we’ve sworn allegiance to a dwarf world, rather than to a giant world.”

[..]

born—on May 3, 1960—first to Colorado,

[..]

..that somebody would love this thing, and want to know me,” Lanier recalls.

[..]

With Tombaugh and other scientists, Lanier found that it was possible to have long conversations about abstract subjects like mathematics “without even being there yourself”—that is, with little emotional connection.

[..]

Lanier, she notes, never saw virtual reality as simply a useful technology. “It had to do with being able to be in somebody else’s mind with him,” she said. “With creating a kind of ultimate connection and communication.”

[..]

“Had she lived, I think I would have been more conventionally successful,” he said. “I think I would be, like, a Harvard Med School professor or something. My dad was more into ‘Be the Buckminster Fuller or the Frank Lloyd Wright’—be the weird outsider who becomes influential. Which is kind of where I ended up

_____________

interview with Kevin Kelly – sept 2014:

http://www.theverge.com/a/virtual-reality/interview#interview

Computational holography ….you’re computing all these tiny edges and because of the quantum nature of light, when light encounters an edge, it can bend a little bit.

We’re getting to the point where we can really calculate fields of energy instead of dealing with just the bulk manipulation of a field, like with a lens. That’s transformative.

For me, the very most important thing about VR was that when you were in it, you’d feel your own existence, in the sense that if all the sensory input is artificial, then what’s floating there, that’s your consciousness. So to me, it was sort of proof that subjectivity is real; that consciousness is real, that it’s not just a construct that we put on things. Just to notice that you really exist, to me, was the very, very core of it. There were a zillion and one variations on that that [could] become really vivid and colorful in different ways. But that was always the core for me. And extending from that, this possibility of a kind of communication that would involve directly creating what people sense in common instead of relying as much on symbols such as words.

jaron on words

no words ness

Post-symbolic communication, yeah.

You know, I think all of us had the sense of mission that we were really doing something that would open up the world, and that a lot of mankind’s problems were kind of just artificial and due to inadequate technology: if we could just have better communication and all this stuff, a lot of problems would clear up.

I had to reconsider that ideology at great personal pain because I didn’t want to question it.

There was a time, up until around the turn of the century, I was writing fire-breathing essays like, “Piracy is your friend” and “Open everything up and it’ll work out.” Then, .., I realized that what was actually happening was the loss of the middle hump of outcomes; we were concentrating people into winners and losers, which is the worst outcome. I’ve also become really concerned about VR’s role in that.

This notion that you could see VR as a way to screw with people without their awareness, crossed that with our current business model where everything is about advertising and manipulation and spying, we [will] have a surveillance economy in the online world. It’s been very painful to see that potential unfolding.

I think every technical person is obliged to think about how we can move towards a world that really serves people, rather than splitting us into an elite and everybody else.

We have to evolve out of what we’re calling the advertising business model. If you extend the idea of advertising to total surveillance in the way that we’re doing it, ..

Obviously, I’m hoping Facebook’s business model will evolve by the time they ship something. Facebook is kind of painting itself into a corner where both it and Google are in this mutual embrace of making each other more and more creepy in battle. And they have to find some way out of that.

The one that I’ve been trying to push the hardest was the origin point for networking when it started — Ted Nelson’s idea of the universe of micropayments.

Second Life was a failure in terms of a design that could interface with existing laws and existing economic systems. It had a huge problem related to taxation and regulation. [With] technology and idealism, you wish away these things, but you can’t. There was a degree of fraud, there was a degree of bullying, but overall Second Life was kind of impressive.

To me, that contrast, that feeling that you have when you’re out of it after you’ve used it, has universally been more precious than what happens in it.

…using field programmable gate arrays as a cloud architecture so you can just reconfigure the whole cloud all the time instead of having a fixed processor design.

I have been part of a lot of conversations about what the laws should be and trying to come up with regulations. How do you protect the kid who’s being bullied without impinging on freedom of speech? How do you prosecute revenge porn without empowering some politician? Or the right to be forgotten — how you do that without empowering some politician? It’s these kinds of discussions that led me to become more interested in Ted Nelson’s original ideas about micropayments. In a lot of ways, if you can make an economy adjust for these sorts of things instead of adjudication with rules, it just works better for everybody.

____________

interview/talk (video) on the myth of ai via edge:

http://edge.org/conversation/the-myth-of-ai

us supreme court declaring corporation a person. corp and algorithm blurring.. so algorithm (machine) and person blurring..

the biggest threat to us – is ai as a fake thing. ie: less threat to us if it was a real thing.

measurement vs manipulation.. ness

not so much a rise of evil as a rise of incompetence

benefit to have something approximate right away.. but because of the mythology of ai.. services are presented.. but they are not free standing services

translators facing a problem similar to recording artists or investigative journalists, photographers.. still needed.. for the big data scheme to work. translation hasn’t been made obsolete.. the structure has been optimized.. but people still needed.

so this pattern of ai working when we have big data.. but then not needing people.. ie: not being paid

i don’t buy the argument that they are getting paid – because they get all the free too. i don’t think society works w/o formal economic benefits. informal benefits aren’t enough.

ai as a set of techniques.. field of study in say math, if we talk about it as a mythology.. then we get into bad interfaces/incompetence/economy.. not knowing if manipulations involved.. so the mythology is the problem not the tech/algorithms

the whole ai thing in a sense distracts us from what the real problem is.. ie: as a society – we have to do something where people don’t want to be making killer drones. a way of profoundly avoiding the political problem… needing new societal structure that isn’t perfect anarchy.

if you could prevent ai from ever actually happening – it would have nothing to do with the problem we fear.

damage – when we pretend to understand things that we really don’t.. ie: the brain

premature mystery reduction – you have to accept your own ignorance

computer world is so influential.. because it has so much money.. ie: human brain project in europe

ai – and others inventions – as a mirror of this mythology of a deity that you need to fear.. also looks like the new economy – elite with the verbiage/language..

hard to talk about because the accepted vocabulary undermines you at every turn…

vocab doesn’t give you easy ways to distinguish…

sounds like – prejudice decreases as discrimination increases.. and Jaron’s wish for – beyond words ness..

_____________

Ray and Jaron and.. differences.. that could be the same.. no? (nov 2014):

http://www.vanityfair.com/culture/2014/11/artificial-intelligence-singularity-theory

Duelling over the Singularity: Ray Kurzweil, who sees salvation in artificial intelligence; Jaron Lanier, a leading skeptic.

“A technological singularity is a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.”

In Kurzweil’s The Age of Spiritual Machines there’s only a single footnote reference to the “Technological Singularity,” a term popularized a few years earlier by the computer scientist and science-fiction writer Vernor Vinge for the moment when machine intelligence surpasses the human kind. Vinge was deeply ambivalent about what he considered this inevitable near future. But Kurzweil was only excited, and determined to lead a movement to bring it on. He called his 2005 manifesto The Singularity Is Near and gave it a happy, triumphalist subtitle: When Humans Transcend Biology.

But the skeptics’ camp has grown. Its members include Jaron Lanier, the virtual-reality pioneer, who now works for Microsoft Research; Mitch Kapor, the early P.C.-software entrepreneur who co-founded the Electronic Frontier and Mozilla foundations and has bet Kurzweil $10,000 that no computer will pass for human before 2030; Microsoft co-founder Paul Allen, who has an eponymous neuroscience-research institute and picked a fight with Kurzweil and the Singularitarians, accusing them of vastly underestimating the brain’s complexity; Jaan Tallinn, the Estonian software engineer who helped create Kazaa and co-founded Skype, and who worries that “once technological development is yanked out of our hands”—with more autonomous and self-replicating computers—“it doesn’t have to continue to be beneficial to humans”; and Elon Musk, the co-founder of Tesla Motors and the founder of the commercial space-travel company SpaceX, who says that “with artificial intelligence we are summoning the demon” and calls A.I. probably “our biggest existential threat,” one that may well cause something “seriously dangerous” within a decade.

In addition to their certainty and optimism, Singularitarians place great faith in the power of unfettered individuals—that is, themselves—to usher in the amazing future.

so imagine leveraging 7 billion people to usher it in. that’s exponentiation. and equity. and humanity. and democracy. no?

Kurzweil and Kapor will need to live to 81 and 79, respectively, to see who wins their Turing-test bet, in 2029.

so what if that wasn’t our goal here… and what if it happened way sooner than 2029. document everything Ada style..

I told Ray I’d double or triple the bet. Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon.  – Mitch Kapor

Lanier gets peevish even being asked to identify a moment when machine intelligence might become convincingly human. “This idea of a certain year is ridiculous. It’s a cultural event. It’s a form of theater. It’s not science. There’s no rigor. It’s like saying, ‘When will hip-hop really be an art form?’ To take it seriously is to make yourself into a moron. It came from one of the most brilliant technical minds”—Turing—“so we give it credence.”

But still, I pressed him, during some of our lifetimes won’t computers be totally fluent in humanese—able to engage in any kind of conversation? Lanier concedes some ground. “It’s true, in some far future situation, we’re going to transition. . . . I think it’s very hard to predict a year.” Approximately when? “I think we’re in pretty safe territory if we say it’s within this century.” Which is much sooner than I figured he’d meant by “far future.”

An unalloyed engineering paradigm, Lanier explained, assumes “a kind of a linear, clean quality to progress that’s just never true. Technologists tend to think of economics as unimportant because it always favors us. This idea that everything will become cheaper all at once is stupid.”

Kurzweil as much as admits he only deeply cares and knows about technology and its theoretical impacts, about political economy and human psychology not so much.

In his 2010 book, You Are Not a Gadget, Jaron Lanier made a cultural argument against our worshipful deference to computers. His most recent book, Who Owns the Future?, is all about politics, economics, power, jobs. It’s not sentient machine overlords enslaving us in 2040 that alarms him, but human overlords using computer technology to impoverish and disempower the middle and working classes, starting right now.

A.I. has turned into this way of masking human contributions..

if we’re entering an unprecedented new technological era we also need to create an unprecedented new political economy to cope.

indeed.. another way

_____________

you are not a gadget

book links to amazon (2011)

– – –

you are not a gadget kindle

notes/highlights:

“What is a person?” If I knew the answer to that, I might be able to program an artificial person in a computer. But I can’t. Being a person is not a pat formula, but a quest, a mystery, a leap of faith.

why choice.. whimsy .. matters. without that.. not human. without that.. missing out on the potential of humanity.

The process of significantly changing software in a situation in which a lot of other software is dependent on it is the hardest thing to do. So it almost never happens.

systemic change gravely needed..

The fateful, unnerving aspect of information technology is that a particular design will occasionally happen to fill a niche and, once implemented, turn out to be unalterable. It becomes a permanent fixture from then on, even though a better design might just as well have taken its place before the moment of entrenchment.

we need to let go of irrelevant tech. (perhaps voting, money, words, school, …)

The consequences of tiny, initially inconsequential decisions often are amplified to become defining, unchangeable rules of our lives.

shed ness

After MIDI, a musical note was no longer just an idea, but a rigid, mandatory structure you couldn’t avoid in the aspects of life that had gone digital. The process of lock-in is like a wave gradually washing over the rulebook of life, culling the ambiguities of flexible thoughts as more and more thought structures are solidified into effectively permanent reality.

most people are other people ness. too much ness.

Lock-in Turns Thoughts into Facts

science of people in schools, et al..

we can compare lock-in to scientific method… Karl Popper – science is a process that disqualifies thoughts as it proceeds – one can, for example, no longer reasonably believe in a flat earth that sprang into being some thousands of years ago. science removes ideas from play empirically, for good reason.

mostly.

Lock-in, however, removes design options based on what is easiest to program, what is politically feasible, what is fashionable, or what is created by chance. Lock-in removes ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows the ideas it immortalizes, by cutting away the unfathomable penumbra of meaning that distinguishes a word in natural language from a command in a computer program.

How can a musician cherish the broader, less-defined concept of a note that preceded MIDI, while using MIDI all day long and interacting with other musicians through the filter of MIDI? Is it even worth trying? Should a digital artist just give in to lock-in and accept the infinitely explicit, finite idea of a MIDI note?

problem with communication.. GB Shaw

If it’s important to find the edge of mystery, to ponder the things that can’t quite be defined—or rendered into a digital standard—then we will have to perpetually seek out entirely new ideas and objects, abandoning old ones like musical notes.

Throughout this book, I’ll explore whether people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer. This has enormous implications: we can conceivably abandon musical notes, but we can’t abandon ourselves.

By now, MIDI has become too hard to change, so the culture has changed to make it seem fuller than it was initially intended to be. We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.
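
for anyone who hasn’t met MIDI directly: a note really is a tiny, fixed data structure. the sketch below shows the standard note-on message (three bytes, 128 allowed key numbers – that much is the MIDI 1.0 spec); the wrapper code is just my illustration of the rigidity being described.

```python
# A MIDI "note on" is three bytes: status (0x90 + channel), key number (0-127),
# and velocity (0-127). Pitch is locked to 128 equal-tempered keys; anything
# between or beyond them simply has no representation in the message itself.

def note_on(channel: int, key: int, velocity: int) -> bytes:
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, key, velocity])

print(note_on(0, 60, 96).hex())  # '903c60' -- middle C, one of the 128 allowed pitches
```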

As a result, UNIX is based on discrete events that don’t have to happen at a precise moment in time. The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be synchronized precisely in time. (MIDI falls somewhere in between the concept of time embodied in UNIX and in the human body, being based on discrete events that happen at particular times.) UNIX expresses too large a belief in discrete abstract symbols and not enough of a belief in temporal, continuous, nonabstract reality; it is more like a typewriter than a dance partner.

Ada ness. oh the dance.

.. the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years. I’m not picking in particular on the iPhone (which I’ll praise in another context later on). I could just as easily have chosen any contemporary personal computer. Windows isn’t UNIX, but it does share UNIX’s idea that a symbol is more important than the flow of time and the underlying continuity of experience.
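
a toy contrast (mine, not from the book) between the two ideas of time: an event-driven loop handles symbols whenever they happen to arrive, while anything coupled to a body, audio for instance, has to deliver on a hard clock, where a missed deadline is immediately felt.

```python
import queue, time

def event_loop(events: queue.Queue, handle) -> None:
    """unix-ish: process each symbol whenever it happens to arrive; timing is incidental."""
    while True:
        item = events.get()              # blocks until something shows up, however long that takes
        if item is None:
            break
        handle(item)

def realtime_loop(render, period_ms: float = 5.0, ticks: int = 3) -> None:
    """body-ish: render must finish on a hard clock; a late tick would be an audible glitch."""
    deadline = time.monotonic()
    for _ in range(ticks):
        render()
        deadline += period_ms / 1000.0
        time.sleep(max(0.0, deadline - time.monotonic()))

# tiny demo of both loops
q = queue.Queue()
for item in ("keystroke", "file saved", None):
    q.put(item)
event_loop(q, handle=print)
realtime_loop(render=lambda: print("audio buffer out"), period_ms=5.0, ticks=3)
```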

UNIX had files; the Mac as it shipped had files; Windows had files. Files are now part of life; we teach the idea of a file to computer science students as if it were part of nature. In fact, our conception of files may be more persistent than our ideas about nature. I can imagine that someday physicists might tell us that it is time to stop believing in photons, because they have discovered a better way to think about light—but the file will likely live on.

What do files mean to the future of human expression? This is a harder question to answer than the question “How does the English language influence the thoughts of native English speakers?”

do we need words.. ness.. how they limit us.

.. philosophies of how humans can express meaning have been so ingrained into the interlocked software designs of the internet that we might never be able to fully get rid of them, or even remember that things could have been different.

again – back before the internet.. any definings of words/symbols/rules/people ..

Part of why this happened is that volunteerism proved to be an extremely powerful force in the first iteration of the web. When businesses rushed in to capitalize on what had happened, there was something of a problem, in that the content aspect of the web, the cultural side, was functioning rather well without a business plan.

It turns out that the digital system of representing people and ads so they can be matched is like MIDI.

lms ness.

Anyone who wants to place ads must use it, or be out in the cold, relegated to a tiny, irrelevant subculture, just as digital musicians must use MIDI in order to work together in the digital realm. In the case of Google, the monopoly is opaque and proprietary.
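
the MIDI analogy gets concrete at the matching layer: the person and the ad are both reduced to the same rigid structure, a bag of keywords, and the “match” is whatever overlap pays best. a toy sketch (real ad auctions are far more elaborate, but the reduction is the point):

```python
# Toy sketch of ad matching: the person is reduced to keywords, the ad to keywords
# plus a bid, and the "match" maximizes keyword overlap times bid.
# An illustration of the reduction being described, not any real ad system.

def score(user_keywords: set, ad: dict) -> float:
    return len(user_keywords & ad["keywords"]) * ad["bid"]

user = {"guitar", "travel", "camping"}
ads = [
    {"name": "tent_sale",   "keywords": {"camping", "hiking"}, "bid": 0.30},
    {"name": "guitar_shop", "keywords": {"guitar", "amps"},    "bid": 0.50},
]
best = max(ads, key=lambda ad: score(user, ad))
print(best["name"])  # guitar_shop -- for this purpose, the person *is* the keyword set
```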

Tim Berners-Lee – tiny little webs won’t work.. it won’t work unless the whole world can use it..

encouraged to create standardized presences on sites like facebook. commercial interests promoted the widespread adoption of standardized designs like the blog..

identity ness

tribal accession – the way we got here is that one subculture of technologists has recently become more influential than the others…  composed of the folks from the open culture/creative commons world, the linux community, folks associated with the artificial intelligence approach to computer science…. their capital is silicon valley, but they have power bases all over the world….their favorite blogs include boing boing, techcrunch, and slashdot, and their embassy in the old country is wired.

the groupthink problem i’m worried about isn’t so much in the minds of the technologists themselves, but in the minds of the users of the tools the cybernetic totalists are promoting..

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only the people were ever meaningful.

.. innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine.

labels ness

 Many of today’s Silicon Valley intellectuals seem to have embraced what used to be speculations as certainties, without the spirit of unbounded curiosity that originally gave rise to them. Ideas that were once tucked away in the obscure world of artificial intelligence labs have gone mainstream in tech culture. The first tenet of this new culture is that all of reality, including humans, is one big information system. That doesn’t mean we are condemned to a meaningless existence. Instead there is a new kind of manifest destiny that provides us with a mission to accomplish. The meaning of life, in this view, is making the digital system we call reality function at ever-higher “levels of description.” People pretend to know what “levels of description” means, but I doubt anyone really does.

Another example is what I call the “race to be most meta.” If a design like Facebook or Twitter depersonalizes people a little bit, then another service like Friendfeed—which may not even exist by the time this book is published—might soon come along to aggregate the previous layers of aggregation, making individual people even more abstract, and the illusion of high-level metaness more celebrated.

.. what if information is inanimate? What if it’s even less than inanimate, a mere artifact of human thought? What if only humans are real, and information is not? Of course, there is a technical use of the term “information” that refers to something entirely real. This is the kind of information that’s related to entropy. But that fundamental kind of information, which exists independently of the culture of an observer, is not the same as the kind we can put in computers, the kind that supposedly wants to be free. Information is alienated experience.

Experience is the only process that can de-alienate information. Information of the kind that purportedly wants to be free is nothing but a shadow of our own minds, and wants nothing on its own. It will not suffer if it doesn’t get what it wants.

in response to that last highlight (via twitter):

@monk51295 Hmmm…I’m having to read that a few times!
Original Tweet: https://twitter.com/coollit/status/551034680842395649

which – my first response was.. that’s exactly the point.. i think. what do i know.. what does Jaron know. what does Lisa know. what culture/space are we in. even reading that in seclusion a few times might not get you anywhere.. at least not where Jaron was going/meaning. words have so many layers on them. ie: free, experience. so rather than add more out-of-context layers to Jaron.. i also highlighted this:

You can think of culturally decodable information as a potential form of experience, very much as you can think of a brick resting on a ledge as storing potential energy. When the brick is prodded to fall, the energy is revealed. That is only possible because it was lifted into place at some point in the past. In the same way, stored information might cause experience to be revealed if it is prodded in the right way. A file on a hard disk does indeed contain information of the kind that objectively exists. The fact that the bits are discernible instead of being scrambled into mush—the way heat scrambles things—is what makes them bits. But if the bits can potentially mean something to someone, they can only do so if they are experienced. When that happens, a commonality of culture is enacted between the storer and the retriever of the bits.

very curious about how we respond to each other. what little bits lead to. if we are listening enough in that process for it to become experience and/or levels of description

People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good.

Whenever a computer is imagined to be intelligent, what is really happening is that humans have abandoned aspects of the subject at hand in order to remove from consideration whatever the computer is blind to.

When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful.

The most important thing to ask about any technology is how it changes people.

..the danger of an engineer pretending to know more than he really does is the greater danger, especially when he can reinforce the illusion through the use of computation.

ridiculous ness

The mysterious stuff can be shuffled around, but it is best to just admit when some trace of mystery remains, in order to be able to speak as clearly as possible about the many things that can actually be studied or engineered methodically.

i don’t know ness

We know a little about what Aztec or Inca music sounded like, for instance, but the bits that were trimmed to make the music fit into the European idea of church song were the most precious bits. The alien bits are where the flavor is found.

flavor – we’re missing it..

.. it seems pointless to insist that what we already understand must suffice to explain what we don’t understand.

einstein.. ness

There’s an odd lack of curiosity about the limits of crowd wisdom.

there are recognizable stages in the degradation of anonymous, fragmentary communication. if no pack has emerged then individuals start to fight.

we evolved to be both loners and pack members. we are optimized not so much to be one or the other, but to be able to switch between them.

networked individualism – re\wire

there are reasonable theories about what brings out the best/worst online behaviors: demographics, economics, child-rearing trends, perhaps even the average time of day of usage could play a role. my opinion, however, is that certain details in the design of the user interface experience of a website are the most important factors.

based on those data, you could conclude that it isn’t exactly anonymity, but transient anonymity, coupled with a lack of consequences, that brings out online idiocy… drive-by anonymity

it is easy to be anonymous or fully revealed, but hard to be revealed just enough.

how do i hide

.. to have a substantial exchange, however, you need to be fully present.

.. rise of the web in the early 1990s took place without leaders, ideology, advertising, commerce, or anything other than a positive sensibility shared by millions of people. …. it stands to reason, however, that the net can also accentuate negative patterns… manhattan project ness (atomic bomb white sands nm)

.. ideology of violation does not radiate from the lowest depths of trolldom, but from the highest heights of academia. there are respectable academic conferences devoted to methods of violating sanctities of all kinds. the only criterion is that researchers come up with some way of using digital technology to harm innocent people who thought they were safe.

oh my.

in 2008.. uni of massachusetts at amherst and uni of washington presented papers at two of these conferences (called defcon and black hat), …. they had spent two years of team effort figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person.

oh my.

the reason i call this an expression of ideology is that there is a strenuously constructed lattice of arguments that decorate this murderous behavior so that it looks grand and new. if the same researchers had done something similar without digital technology, they would at the very least have lost their jobs.

a summary of the ideology goes like this: all those nontechnical, ignorant innocent people out there are going about their lives thinking that they are safe, when in actuality they are terribly vulnerable to those who are smarter than they are. therefore, we smartest technical people ought to invent ways to attack the innocents, and publicize our results, so that everyone is alerted to the dangers of our superior powers. after all, a clever evil person might come along.

?

.. the example of the pacemakers is entirely different (than things that can be fixed before the exploit does harm). the rules of the cloud apply poorly to reality. it took two top academic labs two years of focused effort to demonstrate the exploit, and that was only possible because a third lab at a medical school was able to procure pacemakers and information about them that would normally be very hard to come by. …

there will always be a new exploit, because there is no such thing as perfect security.

perhaps less the more we offer something else to do for everyone.. (usefully preoccupied)

w/ pacemaker – no improvement has taken place, only harm

those who disagree with the ideology of violation are said to subscribe to a fallacious idea known as “security through obscurity.” smart people aren’t supposed to accept this strategy for security, because the internet is supposed to have made security obsolete.

security ness

surely obscurity is the only fundamental form of security that exists, and the internet by itself doesn’t make it obsolete.

exactly. – sync matters… all people free.. (as the day ness) resources available.. (in the city ness) – everyday.

one way to deprogram academics who buy into the pervasive ideology of violation is to point out that security through obscurity has another name in the world of biology: biodiversity.

the reason that computer viruses infect pcs more than macs is not that a mac is any better engineered, but that it is relatively obscure. pcs are more commonplace. this means that there is more return on the effort to crack pcs.

reverse of discrimination increases.. ness..

prejudice long
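
the return-on-effort point as one line of arithmetic, with made-up round numbers: if cracking either platform takes about the same effort, the expected payoff scales with install base.

```python
# illustrative only: if cracking either platform costs roughly the same effort,
# the attacker's expected return scales with how many machines run it.
install_share = {"pc": 0.90, "mac": 0.10}   # hypothetical market shares
value_per_infected_machine = 1.0            # arbitrary units
effort = 1.0                                # assume comparable effort either way

for platform, share in install_share.items():
    print(platform, share * value_per_infected_machine / effort)
# pc 0.9
# mac 0.1  -> nine times the return for the same work, purely from being common
```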

another predictable element of the ideology of violation is that anyone who complains about the rituals of the elite violators will be accused of spreading fud – fear, uncertainty, and doubt. but actually it’s the ideologues who seek publicity. the whole point of publicizing exploits like the attack on pacemakers is the glory. if that notoriety isn’t based on spreading fud, what is?

Whenever we notice an instance when history was swayed by accident, we also notice the latitude we have to shape the future.

outbreaks of nasty online behavior go back quite a way, to the history of the counterculture in america, and in particular to the war on drugs.

usenet.. reserved for nonacademic topics.. why did usenet support drive-by anonymity?…  those crafting it..(academic, corporate, military), had means to make it otherwise.

facebook is similar to no child left behind

Personal reductionism has always been present in information systems. You have to declare your status in reductive ways when you file a tax return. Your real life is represented by a silly, phony set of database entries in order for you to make use of a service in an approximate way.

Information systems need to have information in order to run, but information underrepresents reality.

What computerized analysis of all the country’s school tests has done to education is exactly what Facebook has done to friendships. In both cases, life is turned into a database. Both degradations are based on the same philosophical mistake, which is the belief that computers can presently represent human thought or human relationships. These are things computers cannot currently do.

while there is a lot of talk about networks and emergence from the top american capitalists and technologists, in truth most of them are hoping to thrive by controlling the network that everyone else is forced to pass through. everyone wants to be a lord of a computing cloud.

the wanting to have the party at your places.. ness

both camps (some want to pay/get-paid for intellectual property et al.. others want it free) are hoping that one way or another they will own the central nodes of the network even as they undermine each other.

intellectual property

the wave of financial calamities that took place in 2008 was significantly cloud based.  no one in the pre-digital cloud era had the mental capacity to lie to him- or herself in the way we routinely are able to now.

Even when n is large, there’s no guarantee it’s valid.

big n refers to n, a typical symbol for a mathematical variable. if you have a giant social network like facebook, perhaps some variable called n gains a big value. as n gets larger, statistics become more reliable…. however, it must be pointed out that in practice, even if you believe in the big n as a substitute for judgment, n is almost never big enough to mean anything on the internet.
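
for reference, the statistics behind “big n”: the standard error of a sample mean shrinks like 1/√n, so more data really does tighten the estimate, but only of whatever was actually measured. a quick sketch:

```python
# As n grows, the standard error of a sample mean shrinks like 1/sqrt(n).
# That tightens the estimate of whatever was measured; it says nothing about
# whether the measurement was the right question in the first place.
import math, random

random.seed(0)
for n in (100, 10_000, 1_000_000):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    stderr = 1.0 / math.sqrt(n)   # theoretical standard error, since sigma = 1 here
    print(f"n={n:>9,}  mean ~ {mean:+.4f}  stderr ~ {stderr:.4f}")
```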

Without risk, there is no need for skill.

Entitlement has achieved its singularity and become infinite. Unless the algorithm actually isn’t perfect. But we’re rich enough that we can delay finding out if it’s perfect or not. This is the grand unified scam of the new ideology.

In each case, human creativity and understanding, especially one’s own creativity and understanding, are treated as worthless. Instead, one trusts in the crowd, in the big n, in the algorithms that remove the risks of creativity in ways too sophisticated for any mere person to understand.

ch 7 – talks about early on… idea that need for money might be eliminated.. then says.. “for foreseeable future..we seem committed to using money for rent, food, and medicine”… then brings up Ted Nelson’s idea.. reward via hits/clicks.. (like wesch’s class site) – then says push back was that very few would do anything.. if not a guaranteed reward.. but – “the popularity of amateur content today provides an answer..”

if we idealists had only been able to convince those skeptics, we might have entered into a different, and better, world once it became clear that the majority of people are indeed interested in and capable of being expressive in the digital realm. .. someday i hope there will be a genuinely universal system along the lines proposed by Nelson. i believe most people would embrace a social contract in which bits have value instead of being free.

let’s go one better.. and go back to early on.. imagining no money. it’s almost like we’re reducing ourselves (like you’re saying we do) because of what the computer (or many of us who have become machine-like) are blind to. that ie: money is man made due to lack of trust. it’s not a given.. it’s not a matter of foreseeable future if we believe that risk matters. that art matters. that we do have the means to ground chaos. and that people are amazing.

if we stay bound up in the thought that we need rent et al.. then the cycle remains. next comes.. who decides what value is.. et al. no? let’s realize the time we are living in. that today we can take that leap. for good.

It’s important to remember the extreme degree to which we make everything up in digital systems, at least during the idyllic period before lock-in constricts our freedoms. Today there is still time to reconsider the way we think about bits online, and therefore we ought to think hard about whether what will otherwise become the official future is really the best we can do.

yes.

The scarcity of money, as we know it today, is artificial, but everything about information is artificial. Without a degree of imposed scarcity, money would be valueless.

a sudden advent of socialism, just after everyone has slid down maslow’s pyramid into the mud, is likely to be dangerous. the wrong people often take over when a revolution happens suddenly. (see: iran.)

unless perhaps.. if everyone, everyone, has something else to do..

so.. if socialism is where we are headed, we ought to be talking about it now so that we can approach it incrementally. if it’s too toxic a subject to even talk about openly, then we ought to admit we don’t have the abilities to deal with it competently.

hmm. maybe we are (now – how many years after you wrote this 2008? 2011?).. within the incremental ness. whether or not people are actually sharing in the sharing economy.. at least they are talking about it. incremental makes me think of the strain of the 10 day care centers. talking makes me think of policy getting in the way. like – we have to make sure we have a plan before we try something new… which is too much delay.. most all of us are not rich enough to endure delay. especially when we don’t have to. we can do better today.

The plausibility of our human world, the fact that the buildings don’t all fall down and you can eat unpoisoned food that someone grew, is immediate palpable evidence of an ocean of goodwill and good behavior from almost everyone, living or dead. We are bathed in what can be called love.

Economics is about how to best mix a set of rules we cannot change with rules that we can change.

possibly/currently/extremely blind to the rules we can change.. no?

Sometimes people decide to continue to use a technology that disappoints again and again, even one that is deadly dangerous. Cars are a great example. Car accidents kill more people than wars, and yet we love cars.

Our willingness to suffer for the sake of the perception of freedom is remarkable.

ie: sending our kids to school for 7 hours a day, 12+ years.

..people have often respected bits too much, resulting in a creeping degradation of their own qualities as human beings.

software people know that it’s useless to continue to write tiny programs forever. to do anything useful, you have to take the painful plunge into large code.

deep enough, simple enough, open enough

i long to be shocked and be made obsolete by new generations of digital culture, but instead i am being tortured by repetition and boredom.

there’s a rule of thumb you can count on in each succeeding version of the web 2.0 movement: the more radical an online social experiment is claimed to be, the more conservative, nostalgic, and familiar the result will actually be.

..online culture is fixated on the world as it was before the web was born.

Freedom is moot if you waste it. If the internet is really destined to be no more than an ancillary medium, which I would view as a profound defeat, then it at least ought to do whatever it can not to bite the hand that feeds it—that is, it shouldn’t starve the commercial media industries.

a different song/dance/noise/silence – short

The new century is not yet set up to support its own culture.

What makes something fully real is that it is impossible to represent it to completion.

.. no single, precise idea of a note was ever a mandatory part of the process of making music until the early 1980s, when MIDI appeared. Certainly, various ideas about notes were used to notate music before then, as well as to teach and to analyze, but the phenomenon of music was bigger than the concept of a note.

Hip-hop is imprisoned within digital tools like the rest of us. But at least it bangs fiercely against the walls of its confinement.

A sample played again and again expresses stuckness and frustration, as does the regular beat. The inherent rigidity of software becomes a metaphor for an alienated modern life mired in urban poverty. A digital sound sample in angry rap doesn’t correspond to the graffiti but to the wall.

I have always wanted a simple thing, and the hive refuses to give it to me. I want both to encourage reuse of my music and to interact with the person who hopes to use some of my music in an aggregate work. I might not even demand an ability to veto that other person’s plans, but I want at least a chance at a connection. There are areas of life in which I am ready to ignore the desire for connection in exchange for cash, but if art is the focus, then interaction is what I crave. The whole point of making music for me is connecting with other people. Why should I have to give that up?

Individual voice—the opposite of wikiness—might not matter to mathematical truth, but it is the core of mathematical communication.

Keep in mind that smells are not patterns of energy, like images or sounds. To smell an apple, you physically bring hundreds or thousands of apple molecules into your body. You don’t smell the entire form; you steal a piece of it and look it up in your smell dictionary for the larger reference.

Jim has proposed that the way we think is fundamentally based in the olfactory. A smell is a synecdoche: a part standing in for the whole.

fractal ness

Once finches experienced the luxury of assured mating (provided they were visually attractive), their song variety exploded.

Likewise, much of the most expressive slang comes from people with limited formal education who are making creative use of the words they know.

If we had infinite brains, capable of using an infinite number of words, those words would mean nothing, because each one would have too specific a usage.

Separation anxiety is assuaged by constant connection. Young people announce every detail of their lives on services like Twitter not to show off, but to avoid the closed door at bedtime, the empty room, the screaming vacuum of an isolated mind.

While it is easy to think of neoteny as an emphasis on youthful qualities, which are in essence radical and experimental, when cultural neoteny is pushed to an extreme it implies conservatism, since each generation’s perspectives are preserved longer and made more influential as neoteny is extended. Thus, neoteny brings out contradictory qualities in culture.

What we have that they don’t have is neoteny. Our secret weapon is childhood.

I am trying to create a new way to make software that escapes the boundaries of preexisting symbol systems. This is my phenotropic project.

For me, the prospect of an entirely different notion of communication is more thrilling than a construction like the Singularity. Any gadget, even a big one like the Singularity, gets boring after a while. But a deepening of meaning is the most intense potential kind of adventure available to us.

Don’t think in terms of what you are and are not allowed to do. You are an adult. Think about the world you want and how to get there.

True democracy is hard to sustain if society doesn’t have a middle class. Under the current regime supported by the idealists, there’s almost no online middle class.

To be a person you have to find a sweet spot in which you both invent yourself and are real.

A life entirely without shadows cannot be real.

Recall that the motivation of the invention of the bit was to hide and contain the thermal, chaotic aspect of physical reality for a time in order to have a circumscribed, temporary, programmable nook within the larger theater. Outside of the conceit of the computer, reality cannot be fully known or programmed.

The usual remedy for “information overload,” the popular term for “cost of choice,” is that we need to create “intelligent” filter technology that will serve up only the best information for you, thus perpetuating the idea of a spy computer that knows what’s best for you. Aside from the resulting economic problems, another issue is that we don’t have adequate scientific understandings of meaning or thought, so we can’t actually write software that does the job. We can only pretend to write such software, so that you can pretend it’s working, thus lowering your standards and lessening your personhood.

I am sure that there are some people—a minority, it appears—using these tools more than they are being used by them.

_____________

2012 ish on big think – what it means to be human

http://bigthink.com/rewire/jaron-lanier-what-it-means-to-be-human

1\ lure into a regimentation scheme

2\ offer something beautiful that they love

on running on categories…decreases degree that we invent ourselves from scratch… excessive conformity is a soul killer

_____________

money\less

money ness

__________

nov 2017 – Soothsayer in the Hills Sees Silicon Valley’s Sinister Side

https://www.nytimes.com/2017/11/08/style/jaron-lanier-new-memoir.html?_r=0

video:

musical instruments are the best user interfaces that have ever existed.. so perfectly crafted for the human spirit/body/breath.. they are everything i want tech to be that it isn’t yet..

pursuing that deepening of communication.. not only interesting/beautiful.. also matter of survival.. fascination/race for new techs.. tends too often to lead to power games not sustainable.. instead.. techs that lead us into art.. that is sustainable.. never runs out..

write up:

The wild stories about Mr. Lanier’s coming-of-age come in a rush, from playing piano at the Ear Inn in SoHo and avant-garde parties with John Cage and Laurie Anderson, and working for the Ear magazine, where editors would have to go up to the Dakota regularly and beg for cash from John and Yoko; to breaking out Timothy Leary from the Esalen Institute in Big Sur, Calif., to a failed first marriage to a beautiful woman who had a roommate who kept tarantula venom in their refrigerator. (“Carved by trauma and tradition, her demons dragged my demons to the courthouse,” he writes of their divorce.)

“The whole internet thing was supposed to create the world’s best information resource in all of history,” he says. ..Everything is totally obscure in a profound way that it never was before.

“And the belief system of Silicon Valley is so thick that my friends at Facebook simply still really believe that the answer to any problem is to do more of what they already did, that they’re optimizing the world.

“The Facebook business model is mass behavior modification for pay.

“I think there are a lot of good people at Facebook, and I don’t think they’re evil as individuals. Or at least not the ones that I’ve met. And I know Google a lot better, and I feel pretty certain that they’re not evil. But both of these companies have this behavior-manipulation business plan, which is just not something the world can sustain at that scale. It just makes everything crazy.”

ah.. bass flute. on wait list for book at library

_________

interview on book.. dawn new everything

https://www.theguardian.com/technology/2017/nov/12/jaron-lanier-book-dawn-new-everything-interview-virtual-reality

“I kind of coped with my mother’s death and lots of other things by putting them out of [my] mind,” he says. “Having to encounter that again was difficult. But I am unhappy with the way that digital technology is influencing the world, and I think the solution is to double down on being human…”, which leaves Lanier no choice but to put himself all the way into his book.

when he arrived in Silicon Valley he found like-minded twentysomethings among tech entrepreneurs and hackers: the children of commune dwellers and peace protesters, who had been brought up with the liberal regime of the childcare guru Dr Spock and who recognised no limits to imagination, and often ego. Counterculture fed directly into plutocratic tech culture.

One thread that seems to connect all of his preoccupations over the years is a restless effort to find new arenas in which to communicate with other people, as if the conventional ones are not enough.

‘Does a desire to communicate make me different from other people, or does it expose a commonality?’ I think it is more the latter. I think we all want something deeper.”

beyond words

maté basic needs

We’re at the end of our allotted hour on the phone and it seems a good point at which to close. Before he goes, Lanier says, pointedly, that he wants to note that we “haven’t really talked about virtual reality, which is the theme of my book…”

I’m surprised he thinks this, and I make noises about how I’m not a specialist and wouldn’t feel qualified to challenge him on the specifics of the science. But also, in my mind, I feel we have talked of little else.

___________

dec 2017 interview:

http://uk.businessinsider.com/jaron-lanier-interview-on-silicon-valley-culture-metoo-backlash-ai-and-the-future-2017-12?r=US&IR=T

Actually, from a purely selfish point of view, it does hurt me because I’m in this weird echo chamber where I’m being told ‘you’re a hacker, you’re a technical man, you’re a white man’ and it becomes this ongoing reinforcement where you’re that thing — but the thing is this total artificial bullshit classification that just happens to rise from the resonance of this stupid tool. So while I’m on the beneficial side of it, in some ways, it forces me into this box. I think this kind of thinking hurts everyone, even the people who appear to be the beneficiaries of it. They’re forced into a place where they can’t reach their full potential.

none of us if one of us

marsh label law

There used to be this sense of an arc in history in which, if there was something that seemed like an injustice in society and people worked to improve it, there might be some backlash, but gradually it would improve. Now, what happens is that the backlash is greater than the original thing, and in some ways worse.. the system inherently supports the negative people more than the positive people.

I don’t think it’s possible for us to do better unless we change the incentive structure. 

incentives ness

Right now, of the big five tech companies, three of them don’t rely on that [advertising] model. Whatever you think of Apple, Amazon, and Microsoft, they’re selling goods and services primarily..I’m totally convinced if companies like Google and Facebook can shift to a more monetized economy, then things will get better, simply because people participating will have some incentive to add to the attention economy, where they at least have something else to do, rather than just be assholes.

So Facebook would charge a fee. I’m sympathetic to a lot of people who say that young people or people in poverty couldn’t afford it. And sure, make some accommodation for that. But *in general, people will pay a small fee, but then they’d also have a chance to earn money. If someone is a super-contributor to a social network, if they’re really adding a lot of content, they should get paid for it.

*?

I think it will make things better because it will give people a different game to play in addition to seeking attention.

Using a technology a lot is not necessarily a bad thing, people use books a lot too. The mere use of it is not bad. When we talk about addiction, we should make it specific, and in the case of behavioral addiction, it’s really a noisy feedback loop. I do believe that these noisy feedback loops are dysfunctional, and they should not exist.

broken loop

I have a position that is both unusual and yet entirely correct. From my perspective, there isn’t any AI. AI is just computer engineering that we do. If you take any number of different algorithms and say, “Oh, this isn’t just some program that I’m engineering to do something, this is a person, it’s a separate entity,” it’s a story you’re telling. That fantasy really attracts a lot of people. And then you call it AI. As soon as you do that, it changes the story, it’s like you’re creating life. It’s like you’re God or something. I think it makes you a worse engineer, because if you’re saying that you’re creating this being, you have to defer to that being. You have to respect it, instead of treating it as a tool that you want to make as good as possible on your terms. The actual work of AI, the math and the actuators and sensors in robots, that stuff fascinates me, and I’ve contributed to it. I’m really interested in that stuff. There’s nothing wrong with that. It’s the mythology that’s creepy.

AI is a fantasy that you apply to things. The issue with AI is that we’re giving these artifacts we build so much respect that we’re not taking responsibility for them and designing them as well as possible.

The origin of this idea is with Alan Turing, and understanding Turing’s life is important to understanding that idea about AI because he came up with this notion of AI and the Turing test in the final weeks of his life, just before he killed himself while he was undergoing torture for his sexual identity. I don’t want to presume to know what was going on in Turing’s head, but it seems to me that if there’s this person who is being forced by the state to take these hormones that are essentially a form of torture, he’s probably already contemplating suicide or knows that he’ll commit suicide. And then he publishes this thing about how maybe computers and people are the same and puts it in the form of this Victorian parlor game. You look at it, and it’s a psycho-sexual drama, it’s a statement, a plea for help, a form of escape or a dream of a world where sexuality doesn’t matter so much, where you can just be.

turing

There are many ways to interpret it, but it’s clearly not just a straightforward, technical statement. For Turing, my sense is that his theory was a form of anguish. For other people, maybe it’s more like religion. If you change the words, you have the Catholic church again. The singularity is the rapture, you’re supposed to be a true believer, and if you’re not, you’re going to miss the boat and so on.

while reading dawn.. this from howard:

@BryanAlexander @bikehugger @rtanglao Lanier doesn’t understand diff bet collective action (voluntary, distributed governance) & collectivism (coerced, central control)

Original Tweet: https://twitter.com/hrheingold/status/946460813669277696

__________

interview w Ezra Klein via anne fb share

I enjoyed this podcast a great deal. Jaron Larnier has his finger on the pulse I think. Recommended.

https://www.vox.com/2018/1/16/16897738/jaron-lanier-interview

13 min – heartbreaking.. when people thinking there’s no alt

27 min – on acting as individuals and in packs.. i think that systems that bring out the individual rather than the pack are better.. in packs.. the worst in people comes out.. so used to ie: advertising to get us to conform to packs

maté trump law

29 min – the current incentive structure is awful.. engagement.. so addiction

30 min – if incentive structure rewarded individuals… i think they’d be better

better if no rewards.. intrinsic rewards are what would lead to eudaimoniative surplus

49 min – w vr have a chance to feel your individuality in a diff way

__________
ted2018 –
[https://www.ted.com/talks/jaron_lanier_how_we_need_to_remake_the_internet]
we have to create a culture around tech.. that is so beautiful/meaningful/deep/endlessly creative.. filled w infinite potential that it draws us away from mass suicide
i still believe that creativity as alt to death is true
it would be something like when people discovered language.. new ways to.. imagine..
i imagined w vr.. this new thing.. like a convo.. but like a waking-state intentional dreaming.. post symbolic communication.. directly making the thing you experienced rather than indirectly making symbols to refer to things
lanier beyond words law
yet haunting that beautiful vision is the dark side of how it could turn out
norbert wiener – the human use of human beings – described the potential to create a computer system that would be gathering data from people and feeding it back to them in real time.. paraphrase: a global computer system where everyone has devices.. society would not survive.. would be insane.. but this is just a thought experiment and infeasible.. but.. it’s what we’ve created..
human use of humans
i believe we made a particular mistake.. and can undo it.. happened in 90s.. early digi culture.. to this day.. had a lefty socialist mission about it.. that unlike other things.. ie: must be free because if even one person can’t afford it.. would be ineq..
we also believed in this incompatible thing.. we loved our entrepreneurs.. steve jobs.. that mythical power has a hold on us..
two passions 1\ everything free 2\ supernatural power of entrepreneur.. how do you celebrate entrepreneurship if everything is free
well..you do it w ads.. and in beginning were very cute.. but got cleverer and cleverer.. can’t be called advertising anymore.. rather behavior modification..
can’t call them social networks anymore.. call them behavior modification empires
i don’t think this is a matter of bad people who have done bad things.. i think it’s a global mistake rather than a wave of evil
w behaviorism.. give creature little treats/punishment as feedback..
here’s the thing.. traditionally in academic study of behaviorism positive and negative stimuli are treated about the same.. in commercial setting there’s a new kind of difference.. negs are cheaper.. the bargain stimuli.. much easier to lose trust/love than to build it..
customers are on a very fast loop.. so responding more to negs.. so even well-intentioned players advance the cause of negative emotions.. which get amplified by the system..
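
a toy sketch of that loop (just an illustration, not anything from the talk): if the only signal the system optimizes is how fast people react, and negative stimuli reliably get faster reactions, a greedy optimizer drifts toward serving mostly negative content.. all numbers/names below are made up

```python
import random

# toy model (not lanier's): negative stimuli get reactions faster, so a
# greedy engagement optimizer that rewards the quickest response drifts
# toward serving mostly negative content. all numbers are invented.
RESPONSE_DELAY = {"positive": 5.0, "negative": 1.0}  # avg seconds to react

def simulate(rounds=10_000, explore=0.05, seed=0):
    rng = random.Random(seed)
    scores = {"positive": 1.0, "negative": 1.0}   # measured "engagement" per stimulus
    served = {"positive": 0, "negative": 0}
    for _ in range(rounds):
        # mostly exploit whichever stimulus currently scores higher
        if rng.random() < explore:
            choice = rng.choice(["positive", "negative"])
        else:
            choice = max(scores, key=scores.get)
        served[choice] += 1
        delay = rng.expovariate(1.0 / RESPONSE_DELAY[choice])
        reward = 1.0 / (1.0 + delay)                       # faster reaction -> higher score
        scores[choice] += 0.1 * (reward - scores[choice])  # running average
    return served

if __name__ == "__main__":
    print(simulate())  # negative ends up served far more often
```
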
the alt is to turn back the clock and remake that decision.. ie: 1\ many people pay for these things.. sometimes when you pay for stuff things get better.. ie: tv w netflix.. i think google et al would do better in this world.. we don’t need to punish sv.. only google and fb depend on surveillance.. and i love these people.. a win-win solution
ugh

i don’t believe we can survive unless we fix this.. we cannot have a society in which if 2 people wish to communicate.. the only way that can happen is if it’s financed by a 3rd person who wishes to manipulate them

so.. let’s try another way to communicate..  2 convos that io dance.. as the day

as it could be..

in the meantime if the co’s won’t change.. delete your accounts

__________

glenn and jaron and matt and vint

There is a great deal of wisdom here. Depression is clearly up in many countries as part of the internet age, and while correlation isn’t causation, it seems clear part of it is causation. The internet promised greater connectedness but in many key ways it produced the opposite: https://t.co/hJDD1fcxN3

Original Tweet: https://twitter.com/ggreenwald/status/1008671140414476289

@ggreenwald I would distinguish between the internet and social media

Original Tweet: https://twitter.com/matthewstoller/status/1008674011189071872

begs we give ourselves a do over.. and use internet as it could be..

ie: 2 convos.. as infra

cc @vgcerf

_____________

march 2020 – AI is an Ideology, Not a Technology – by jaron lanier and glen weyl

Opinion: At its core, “artificial intelligence” is a perilous belief that fails to recognize the agency of humans. https://t.co/jPolulQEYy
Original Tweet: https://twitter.com/WIRED/status/1297799387222028288

the term “artificial intelligence” doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent.

intellect ness

If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.

A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. *Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There’s always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.

*need to go deeper.. and use ‘ai’ to detox/reconnect the humans – to undo our hierarchical listening ie: 2 convers as infra
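
a minimal sketch of that reframing (my own illustration, not from the article): everything the ‘classifier’ knows is a list of examples people labeled.. the algorithm itself is just a generic matching rule.. features/labels below are invented placeholders

```python
# minimal sketch (not from the article): a "cat vs dog" classifier is nothing
# but human-contributed labeled examples plus a generic matching rule.
# the feature vectors and labels below are invented placeholders.
from math import dist

# people, not the machine, supplied these (features, label) pairs
HUMAN_LABELED_EXAMPLES = [
    ((0.9, 0.2, 0.1), "cat"),
    ((0.8, 0.3, 0.2), "cat"),
    ((0.2, 0.9, 0.7), "dog"),
    ((0.1, 0.8, 0.9), "dog"),
]

def classify(features):
    """generic 1-nearest-neighbour rule: echo back the label of the
    closest example a human already provided."""
    _, label = min(HUMAN_LABELED_EXAMPLES,
                   key=lambda ex: dist(ex[0], features))
    return label

print(classify((0.85, 0.25, 0.15)))  # -> "cat", because humans said so
```
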

Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.

yeah way deeper than computation

Regardless of how one sees it, an understanding of AI focused on independence from—rather than interdependence with—humans misses most of the potential for software technology.

yeah that

but then article goes into money

Supporting the philosophy of AI has burdened our economy.

To those who fear that bringing data collection into the daylight of acknowledged commerce will encourage a culture of ubiquitous surveillance, we must point out that it is the only alternative to such a culture. *It is only when workers are paid that they become citizens in full. **Workers who earn money also spend money where they choose; they gain deeper power and voice in society. They can gain the power to choose to work less, for instance. This is how worker conditions have improved historically

*wow – citizen ness is killing us

**double wow – not legit voice if getting paid

yeah.. not so much.. we need to let go money (any form of measuring/accounting).. we can do that if we use tech/ai for augmenting interconnectedness

Virtual and augmented reality hold out the prospect of dramatically increasing what is possible, allowing more types of collaborative work to be performed at great distances. Productivity software from Slack to Wikipedia to LinkedIn to Microsoft product suites make previously unimaginable real-time collaboration omnipresent.

what we need is a means to get us back/to fittingness.. life/living/humanity/human-scale is not about productivity

But active engagement is possible only if, unlike in the usual AI attitude, all contributors, not just elite engineers, are considered crucial role players and are financially compensated.

that’s how to get/keep/imprison whales in sea world..

what we need is a means to set us all free

“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but *much of humanity.. t Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.

*has to be all or it won’t work

The richest companies, individuals, and regions now tend to be the ones closest to the biggest data-gathering computers. Pluralistic visions of liberal *democratic market societies will lose out to AI-driven ones unless we reimagine the role of technology in human affairs.

we need to reimagine the role of tech in human affairs.. so that we also let go of thinking we want *democratic market societies..

Not only is this reimagination possible, it’s been increasingly demonstrated on a large scale in one of the places most under pressure from the AI-fueled CCP ideology, just across the Taiwan Strait. Under the leadership of Audrey Tang and her Sunflower and g0v movements, almost half of Taiwan’s population has joined a national participatory data-governance and -sharing platform that *allows citizens to self-organize the use of data,..t demand services in exchange for these data, deliberate thoughtfully on collective choices, and vote in innovative ways on civic questions.

not legit *self org-ing when based on non legit data.. rather.. they’re modeling self org-ing w/in finite set of choices (very similar to pilot math year)

huge diff – need to focus on legit data first – ie: self-talk as data

Driven neither by pseudo-capitalism based on barter nor by state planning,

but still driven by telling people what to do ness (ie: it’s engrained in us all that we need civic participation, collective org, et al.. rather than legit free people)

Taiwan’s citizens have built a culture of agency over their technologies through civic participation and collective organization, something we are starting to see emerge in Europe and the US through movements like data cooperatives. Most impressively, tools growing out of this approach have been critical to Taiwan’s best-in-the-world success at containing the Covid-19 pandemic, with only 49 cases to date in a population of more than 20 million at China’s doorstep.

To paraphrase Edmund Burke, all that is necessary for the triumph of an AI-driven, automation-based dystopia is that liberal democracy accept it as inevitable.

like accepting money (any form of measuring/accounting) as inevitable

___________

from kevin carson‘s thermidor of progressives:

35

he (lanier) is so obsessed w what james scott calls ‘legibility from above’ that he ignores the evolution of mechs for horizontal legibility.. lanier’s concern, based on the avg person’s alleged incompetence in the absence of ‘professional’ intermediators, is the basis for a widely shared cult of professionalism..

____________

the social dilemma (doc)

_____________
