Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is an academic field of study that pursues the goal of creating intelligence. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.

AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long term goals. …

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.




Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as “strong AI“, “full AI” or as the ability to perform “general intelligent action”.


Artificial general intelligence research

Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. The term was introduced by Mark Gubrud in 1997 in a discussion of the implications of fully automated military production and operations. … As yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in “The Singularity is Near” (i.e. between 2015 and 2045) is plausible. Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include Adaptive AI, the Machine Intelligence Research Institute, the OpenCog Foundation, Bitphase AI, TexAI, Numenta and the associated Redwood Neuroscience Institute, and AND Corporation.

Ray Kurzweil

Ada Lovelace

Alan Turing

Ben Goertzel

Roger Schank

open ai

ai for peace – Timo Honkela

augmenting interconnectedness

Kai-Fu Lee


first recollection of seeing this as something (humane).. when Bernd suggested i look into Monica’s work.

Maurice Conti

tech aug


adding page while (still) reading through this article:

Jason Silva (@JasonSilva): “the ability to create new explanations is the unique, morally & intellectually significant functionality of people”…
‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either).
Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery.
what if increase error is needed
Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software.
again… not human.. no?
Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans
Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general.
But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.

… could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways
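the probability-updating mechanism the passage criticises can be sketched in a few lines — a toy Bayes-rule update with hypothetical numbers, not a claim about how any actual AGI project works:

```python
# Minimal sketch of the Bayesian updating described above:
# a prior belief in an idea is revised by evidence via Bayes' rule.
def bayes_update(prior, likelihood, marginal):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    return likelihood * prior / marginal

# Hypothetical numbers: prior belief 0.5 that a hypothesis is true;
# the evidence is more likely under the hypothesis than under its negation.
prior = 0.5
likelihood = 0.8                    # P(E | H)
marginal = 0.8 * 0.5 + 0.4 * 0.5    # P(E), summed over H and not-H
posterior = bayes_update(prior, likelihood, marginal)
print(posterior)  # 2/3: the 'rewarded' idea gains weight
```

this is exactly the reinforce-or-extinguish loop deutsch describes — ideas that fit ‘experience’ gain probability, those that don’t lose it..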

Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified, logic. Hence the term ‘general’ in AGI. A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.
Nowadays, an accelerating stream of marvellous and useful functionalities for computers are coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do.
The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up. Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.
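the exhaustive game-tree search the passage contrasts with a grandmaster's explanatory thinking can be sketched as minimax — a toy over a nested list of leaf scores, not Deep Blue's actual algorithm:

```python
# Toy sketch of game-tree search: the 'game' is a nested list whose
# leaves are position scores; minimax picks the best guaranteed line,
# proving outcomes without any explanation of why a move is good.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a position score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical tiny tree: two moves, each answered by two replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: best outcome the opponent cannot prevent
```

the program can prove that score 3 is forced — but, as the passage says, it has no clue what the objective of the game means..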
This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.
AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

posted on fb by jean russell
Good look at how #AI is demolishing language barriers. #peacetech in the making.
begs we approach the limit of 7 bn idiosyncratic jargons.. new every day..

Long, but valuable read: everything you need to know about machine learning & #AI for 2017: via @nytimes

When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this.
If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with *incredible granularity
*ginormous small ness.. like the potential of hosting-life-bits via self-talk as data.. aka: approaching the limit of 7 bn idiosyncratic jargons.. new every day..
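the ‘voters’ image from the excerpt can be sketched as a single artificial neuron taking a weighted vote over its inputs — a toy illustration with made-up weights, not the models the nytimes piece describes:

```python
# Toy 'voter': one artificial neuron sums weighted inputs and
# fires (passes along its charge) only if the total crosses a threshold.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hypothetical weights for a 2-input, AND-like vote.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1: both voters agree
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0: not enough weight
```

what matters, as the excerpt says, is less any single voter than the manifold connections among millions of them..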
become indigenous
from Roger:


Cognitive computing is not cognitive at all » Banking Technology…

People learn from conversation and Google can’t have one.

what computers can’t do .. ai.. algo .. ness..

i’d suggest ai as:

augmenting interconnectedness


from Adam Greenfield‘s radical techs:


ch 9 – artificial intelligence – the eclipse of human discretion


often the researchers involved have displayed a lack of curiosity for any form of intelligence beyond that they recognized in themselves.. and a marked lack of appreciation for the actual depth and variety of human talent. .. t

project to develop ai has very often nurtured a special kind of stupidity in some of its most passionate supporters – a particular sort of arrogant ignorance that only afflicts those of high intellect..

as we reach teaching machines to think.. no longer thought of as ai.. which is progressively redefined as something perpetually out of reach

rt yesterday:

Andrew Ng (@AndrewYNg) tweeted at 12:04 PM – 1 May 2017 :

If you’re trying to understand AI’s near-term impact, don’t think “*sentience.” Instead think “automation on steroids.” (

*Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).


hoping.. there are some creative tasks technical systems will simply never be able to perform

the essence of learning, though, whether human or machinic, is developing the ability to detect, recognize and eventually reproduce patterns. and what poses problems for this line of argument (or hope, whichever it may be) is that many if not all of the greatest works of art – the things we regard as occupying the very pinnacle of human aspiration and achievement – consist of little other than patterns… rich/varied.. but nothing magical..

the humanist in me recoils at what seems like the brute-force reductionism of statements like this, but beyond some ghostly ‘inspiration’ it’s hard to distinguish what constitutes style other than habitual arrangements, whether those be palettes, chord progressions, or frequencies of word use and sentence structure. and these are just the sort of feature that are ready-made for extraction via algorithm.

habitual arrangements.. chord progressions.. on composing/orchestration ness – ben folds composes in 10 min – orchestra knows what to play – plays in sync


everyone will have their own fav ies of an art that seems as if it must transcend reduction. for me, it’s the vocal phrasing of nina simone.. the ache and steel of life in her voice..


the notion that everything i hear might be flattened to a series of instructions and executed by machine..


‘to extract the features that make rembrandt rembrandt..’ for algo’d.. next rembrandt


alphago isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithm laminated together


deep blue ..a special purpose engine exquisitely optimized for – and therefore completely useless at anything other than – the rules of chess.. alphago is a general learning machine..


.. simply brute force. that may well have been how deep blue beat kasparov. it is not how alphago defeated lee sedol.. for many i suspect, next rembrandt will feel like a more ominous development than alphago


constructed bushido is unquestionably something that resides in the human heart, or does not…this matters when we describe a machine, however casually, as possessing this spirit.


points toward a time when just about any human skill can be mined for its implicit rules and redefined as an exercise in pattern recognition and reproduction, even those seemingly most dependent on soulful improvisation

improv\e and algo ness


what we now confront is the possibility of machines transcending our definitions of mastery, pushing outward into an enormously expanded envelope of performance..t

so maybe the issue is.. with mastery and performance.. maybe those things aren’t the soul of a humanity.. and so.. they are able to be algo’d.. but don’t rep/define us

lee sedol: it’s not a human move. i’ve never seen a human play this move. so beautiful.

so too with flying.. et al.. doesn’t mean it’s more human.. means it’s augmenting a human/animal performance..

the ai player, *unbound by the structural limitations, the conventions of taste or the inherent prejudices of human play, explore fundamentally different pathways – and again, there’s an aesthetic component to the sheer otherness of its thought.. t

thinking *this is what we need ai ness to do.. to get us back to us.. as a means to listen to each of us.. everyday.. w/o agenda/judgment.. et al..


i don’t know what it will feel like to be human in that posthuman moment. i don’t think any of us truly do. any advent of an autonomous intelligence greater than our own can only be something like a divide-by-zero operation performed on all our ways of weighing the world, introducing a factor of infinity into a calculus that isn’t capable of containing it..t

i don’t know.. maybe it’s that ability to divide by zero that will blur our mathematical lines/assumptions of what it means to know things.. even what it means to be human

eagle and condor ness


Roger Schank (@rogerschank) tweeted at 6:43 AM – 10 Jan 2018 :

BBC News – CES 2018: When will AI deliver for humans? thanks for a truthful AI article @BBCRoryCJ (


Clive Thompson (@pomeranian99) tweeted at 5:21 AM – 2 Mar 2018 :

Google created in-house training to teach staff machine learning; they’ve now turned it into publicly-available online tutorials —


via céline keller rt

Joanna J Bryson (@j2bryson) tweeted at 6:14 AM – 27 May 2018 :

@krustelkram There are simple & well established defs of intelligence, but while people confound it with „humanlike“ they ignore dictionaries. I’m working on a book, but meantime (

Johannes Klingebiel (@Klingebeil) tweeted at 6:16 AM – 27 May 2018 :
@krustelkram I always refer to this article, because it nicely sums up the problem with “AI” (

no one is quite sure what the phrase even means.

So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.


Roger Schank (@rogerschank) tweeted at 5:21 AM – 25 Jul 2018 :

‘The discourse is unhinged’: how the media gets AI alarmingly wrong (

While the giddy hype around AI helped generate funding for researchers at universities and in the military, by the end of the 1960s it was becoming increasingly obvious to many AI pioneers that they had grossly underestimated the difficulty of simulating the human brain in machines. In 1969, Marvin Minsky, who had pronounced only eight years earlier that machines would surpass humans in general intelligence in his lifetime, co-authored a book with Seymour Papert proving that Rosenblatt’s perceptron could not do as much as the experts had once promised and was nowhere near as intelligent as the media had let on.

minsky .. papert

Minsky and Papert’s book suffused the research community with a contagious doubt that spread to other fields, leading the way for an outpouring of AI myth debunking. In 1972, the philosopher Hubert Dreyfus published an influential screed against thinking machines called What Computers Can’t Do, and a year later the British mathematician James Lighthill produced a report on the state of machine intelligence, which concluded that “in no part of the field have the discoveries made so far produced the major impact that was then promised”.

what computers can’t do

What Lipton finds most troubling, though, is not technical illiteracy among journalists, but how social media has allowed self-proclaimed “AI influencers” who do nothing more than paraphrase Elon Musk on their Medium blogs to cash in on this hype with low-quality, TED-style puff pieces.

“If you compare a journalist’s income to an AI researcher’s income,” she says, “it becomes pretty clear pretty quickly why it is impossible for journalists to produce the type of carefully thought through writing that researchers want done about their work.” She adds that while many researchers stand to benefit from hype, as a writer who wants to critically examine these technologies, she only suffers from it.



“we’re starting to see physiognomy and phrenology get a rerun in AI research”
-great @katecrawford lecture on bias and machine learning

Original Tweet:

22 min – machine learning making more and more decisions.. and biased..t

even deeper – we need to wake us all up.. detox us.. first.. ie: the data/people we’re worried about discriminating against.. aren’t themselves.. so data is not only mis algo’d.. it’s not-us to begin with

ie: black science of people/whales

24 min – who’s idea of neutrality is at work here..t

doesn’t really matter if we’re looking at whales in sea world..

we have to go deeper..

ie: ai as augmenting our interconnectedness in order to get what we need most: the energy of 7bn alive people

30 min – what if a deeper harm than just bias..t

yeah that..

33 min – this confusion of treating social ways of being as though they are fixed objects.. real classificatory harm..t

marsh label law

not only that.. deeper.. again.. we’re labeling whales in sea world.. and assuming that’s their true nature

34 min – we have an ethical obligation to not do things that are scientifically questionable.. that could cause serious harm.. and further marginalize groups..

way deeper than if someone is gay..

38 min – one of biggest challenges of next decade.. the social implication of ai..t

in a good way..  if we go for augmenting interconnectedness

41 min – what if we asked.. what kind of world do we want.. and then.. what kind of tech could drive that..t

let’s go for augmenting interconnectedness.. tech as it could be..


roger on if ai can do tacit knowledge

@hjarche of course its possible; but modern AI wants to do that by counting and pattern matching; to do it requires understanding how knowledge is acquired; we aren’t there yet
Original Tweet:


Co.Design (@FastCoDesign) tweeted at 6:51 AM – 24 Sep 2018 :
The exploitation, injustice, and waste powering our AI (

It’s not just the miners: It’s also the humans operating the gigantic global shipping and manufacturing apparatus that brings each piece of the puzzle together, it’s the click-workers who label and sort vast data sets on which to train AI, and it’s you, the user, who is simultaneously acting as “a consumer, a resource, a worker, and a product,” as Crawford and Joler write in the essay. Through this lens, Echo’s complex processing becomes a story of human work and–more disturbingly–human exploitation. A child laborer in the mines of the Congo would need to work for 700,000 years without stopping to accumulate the kind of capital that Amazon CEO Jeff Bezos makes per day. “At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies,” Crawford and Joler write in the essay.


let’s talk about AI because its non-existence poses so many nonsense questions

Original Tweet:


neotene (@ctrlcreep) tweeted at 1:23 PM – 13 Feb 2018 :
I’m a local maximum engineer! When artificial intelligences threaten humanity, I build little worlds that satisfy their utility functions, trapping them in programmed bliss, harmless cycles of hedonism (


ai and creativity

John Hagel (@jhagel) tweeted at 6:11 AM – 1 May 2019 :
Sociology professor Anton Oleinik argues that neural networks are structured in a way that limits the possibility that they will ever have true artificial creativity (


ℳąhą Bąℓi, PhD مها بالي  (@Bali_Maha) tweeted at 5:43 AM – 9 Jun 2019 :
This is among the *best* critique of AI/analytics I have ever read @14prinsp @KateMfD @Czernie @gsiemens

article by @danmcquillan Dan McQuillan

Machine learning extends bureaucracy into the future;
or rather, it bureaucratises a probabilistic future and actualises it in the present.

too much ness

A human-in-the-loop is not a humanistic pushback
as that human is themselves subsumed by the institution-in-the-loop.

broken feedback loop

They (people’s councils) are a collective questioning
of the decisions that define the way the machines will make decisions,
by applying critical pedagogy and situated knowledge.

They constitute a different subjectivity –
iterative deliberation of consensus, done right,
is an antidote to bureaucracy and to the calculative iterations of machine learning

public consensus always oppresses someone(s)..

let’s listen to curiosity first.. consensus become irrelevant

We need to develop a different order of ordering.
Instead of ways of organising that allow everyone to evade responsibility,
we need to reclaim our own agency through self-organisation..t

2 convers as infra

We need to think collectively about ways out of this mess,
learning from and with each other rather than relying on machine learning.
countering thoughtlessness with practices of collective care.

listen to and facil daily curiosity  ie: cure ios city

We can’t uninvent either AI or bureaucracy,
but we can choose to radically change both our modes of organisation
and our approach to computational learning.


RT @instigating: How #AI & tech are reshaping daily #urban life. #cities #design @ArchDaily
Original Tweet:

How Artificial Intelligence Will Shape Design by 2050 – april 2020 by eric baldwin

 imagining how AI can shape our lives for the better.

yeah.. let’s do that.. ai humanity needs: augmenting interconnectedness

Artificial intelligence is broadly defined as the theory and development of computer systems to perform tasks that normally require human intelligence. The term is often applied to a machine or system’s ability to reason, discover meaning, generalize, or learn from past experience. Today, AI already utilizes algorithms to suggest what we should see, read, and listen to, and these systems have extended to everyday tasks like suggested travel routes, autonomous flying, optimized farming, and warehousing and logistic supply chains. Even if we are not aware of it, we are already feeling the effects of AI adoption.

maybe we just need machines to listen w/o judgment.. and then use that data to connect us locally.. everyday

ie: 2 convers as infra.. via tech/ai as it could be..

the act of following trend-lines to possible conclusions and imagining how we might live is a productive exercise.. t

yeah.. i think it’s more of a cancer.. to follow trend lines.. ie: just perpetuating our whales in sea world ness

thinking we need to be productive itself is perpetuating that illness

AI can further analyze and monitor how we move about the city, work together, and unwind.

we don’t need to be analyzed/monitored.. we just need to be listened to .. w/o judgement.. and then use that data to augment our interconnectedness


march 2020 – AI is an Ideology, Not a Technology – by jaron lanier

Opinion: At its core, “artificial intelligence” is a perilous belief that fails to recognize the agency of humans.
Original Tweet:

the term “artificial intelligence” doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure of tasks that we classify as intelligent.

intellect ness

If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.

A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. *Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There’s always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans.

*need to go deeper.. and use ‘ai’ to detox/reconnect the humans – to undo our hierarchical listening ie: 2 convers as infra

Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.

yeah way deeper than computation

Regardless of how one sees it, an understanding of AI focused on independence from—rather than interdependence with—humans misses most of the potential for software technology.

yeah that

but then article goes into money

Supporting the philosophy of AI has burdened our economy.

To those who fear that bringing data collection into the daylight of acknowledged commerce will encourage a culture of ubiquitous surveillance, we must point out that it is the only alternative to such a culture. *It is only when workers are paid that they become citizens in full. **Workers who earn money also spend money where they choose; they gain deeper power and voice in society. They can gain the power to choose to work less, for instance. This is how worker conditions have improved historically

*wow – citizen ness is killing us

**double wow – not legit voice if geeing paid

yeah.. not so much.. we need to let go money (any form of measuring/accounting).. we can do that if we use tech/ai for augmenting interconnectedness

Virtual and augmented reality hold out the prospect of dramatically increasing what is possible, allowing more types of collaborative work to be performed at great distances. Productivity software from Slack to Wikipedia to LinkedIn to Microsoft product suites make previously unimaginable real-time collaboration omnipresent.

what we need is a means to get us back/to fittingness.. life/living/humanity/human-scale is not about productivity

But active engagement is possible only if, unlike in the usual AI attitude, all contributors, not just elite engineers, are considered crucial role players and are financially compensated.

that’s how to get/keep/imprison whales in sea world..

what we need is a means to set us all free

“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but *much of humanity. .t Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.

*has to be all or it won’t work

The richest companies, individuals, and regions now tend to be the ones closest to the biggest data-gathering computers. Pluralistic visions of liberal *democratic market societies will lose out to AI-driven ones unless we reimagine the role of technology in human affairs.

we need to reimagine the role of tech in human affairs.. so that we also let go of thinking we want *democratic market societies..

Not only is this reimagination possible, it’s been increasingly demonstrated on a large scale in one of the places most under pressure from the AI-fueled CCP ideology, just across the Taiwan Strait. Under the leadership of Audrey Tang and her Sunflower and g0v movements, almost half of Taiwan’s population has joined a national participatory data-governance and -sharing platform that *allows citizens to self-organize the use of data,..t demand services in exchange for these data, deliberate thoughtfully on collective choices, and vote in innovative ways on civic questions.

not legit *self org-ing when based on non legit data.. rather.. they’re modeling self org-ing w/in finite set of choices (very similar to pilot math year)

huge diff

Driven neither by pseudo-capitalism based on barter nor by state planning,

but still driven by telling people what to do ness (ie: it’s engrained in us all that we need civic participation, collective org, et al.. rather than legit free people)

Taiwan’s citizens have built a culture of agency over their technologies through civic participation and collective organization, something we are starting to see emerge in Europe and the US through movements like data cooperatives. Most impressively, tools growing out of this approach have been critical to Taiwan’s best-in-the-world success at containing the Covid-19 pandemic, with only 49 cases to date in a population of more than 20 million at China’s doorstep.

To paraphrase Edmund Burke, all that is necessary for the triumph of an AI-driven, automation-based dystopia is that liberal democracy accept it as inevitable.

like accepting money (any form of measuring/accounting) as inevitable


cory on ai