Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create such intelligence. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches, on the use of a particular tool, or on the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long term goals. …
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.
Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as “strong AI”, “full AI” or as the ability to perform “general intelligent action”.
Artificial general intelligence research
Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. The term was introduced by Mark Gubrud in 1997 in a discussion of the implications of fully automated military production and operations. … As yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in “The Singularity is Near” (i.e. between 2015 and 2045) is plausible. Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include Adaptive AI, the Machine Intelligence Research Institute, the OpenCog Foundation, Bitphase AI, TexAI, Numenta and the associated Redwood Neuroscience Institute, and AND Corporation.
ai for peace – Timo Honkela
adding page while (still) reading through this article:
Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines. [..] could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways.
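deutsch’s target here — minds assigning probabilities to ideas and modifying them in the light of experience — can be shown mechanically in a few lines. a toy sketch (all numbers hypothetical), just to make the doctrine concrete:

```python
# Toy Bayesian update: posterior P(H|E) = P(E|H) * P(H) / P(E).
def bayes_update(prior, likelihood, likelihood_if_not):
    """Return posterior P(H|E) from prior P(H), P(E|H) and P(E|not H)."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a weak prior repeatedly 'reinforced' by
# confirming experience drifts toward certainty.
p = 0.1
for _ in range(5):
    p = bayes_update(p, likelihood=0.8, likelihood_if_not=0.3)
print(round(p, 3))  # → 0.937
```

five rounds of confirming ‘experience’ push a 10% belief past 90% — the ‘rewarded and reinforced’ behaviour the passage above calls behaviouristic.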
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this.[..]If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.[..]There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.[..]If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with *incredible granularity.
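the ‘bunch of widgets’ picture above can be sketched as a single artificial neuron: weighted connections that either pass a charge along or don’t. the weights and threshold here are illustrative, not from the article — the point is just that the computation lives in the connections, not the widget:

```python
# A single artificial neuron: weighted connections decide whether it 'fires'.
def neuron(inputs, weights, threshold):
    charge = sum(i * w for i, w in zip(inputs, weights))
    return 1 if charge >= threshold else 0

# Same widget, different wiring, different behaviour. With these
# (illustrative) weights the neuron acts like a logical AND of two inputs.
and_gate = lambda a, b: neuron([a, b], weights=[0.6, 0.6], threshold=1.0)
print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
```

change the wiring (say, threshold 0.5) and the identical widget computes OR instead — which is why the passage says what matters are less the neurons than the manifold connections among them.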
Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…
People learn from conversation and Google can’t have one.
i’d suggest ai as:
ch 9 – artificial intelligence – the eclipse of human discretion
often the researchers involved have displayed a lack of curiosity for any form of intelligence beyond that they recognized in themselves.. and a marked lack of appreciation for the actual depth and variety of human talent. .. t
project to develop ai has very often nurtured a special kind of stupidity in some of its most passionate supporters – a particular sort of arrogant ignorance that only afflicts those of high intellect..
as we reach teaching machines to think.. it’s no longer thought of as ai.. which is progressively redefined as something perpetually out of reach
Andrew Ng (@AndrewYNg) tweeted at 12:04 PM – 1 May 2017 :
If you’re trying to understand AI’s near-term impact, don’t think “*sentience.” Instead think “automation on steroids.” (http://twitter.com/AndrewYNg/status/859106360662806529?s=17)
*Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).
hoping.. there are some creative tasks technical systems will simply never be able to perform
the essence of learning, though, whether human or machinic, is developing the ability to detect, recognize and eventually reproduce patterns. and what poses problems for this line of argument (or hope, whichever it may be) is that many if not all of the greatest works of art – the things we regard as occupying the very pinnacle of human aspiration and achievement – consist of little other than patterns… rich/varied.. but nothing magical..
the humanist in me recoils at what seems like the brute-force reductionism of statements like this, but beyond some ghostly ‘inspiration’ it’s hard to distinguish what constitutes style other than habitual arrangements, whether those be palettes, chord progressions, or frequencies of word use and sentence structure. and these are just the sort of feature that are ready-made for extraction via algorithm.
habitual arrangements.. chord progressions.. on composing/orchestration ness – ben folds composes in 10 min – orchestra knows what to play – plays in sync
everyone will have their own fav ies of an art that seems as if it must transcend reduction. for me, it’s the oval phrasing of nina simone.. the ache and steel of life in her voice..
the notion that everything i hear might be flattened to a series of instructions and executed by machine..
‘to extract the features that make rembrandt rembrandt..’ for algo’d.. next rembrandt
alphago isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithm laminated together
deep blue ..a special purpose engine exquisitely optimized for – and therefore completely useless at anything other than – the rules of chess.. alphago is a general learning machine..
.. simply brute force. that may well have been how deep blue beat kasparov. it is not how alphago defeated lee sedol.. for many i suspect, next rembrandt will feel like a more ominous development than alphago
constructed bushido is unquestionably something that resides in the human heart, or does not…this matters when we describe a machine, however casually, as possessing this spirit.
points toward a time when just about any human skill can be mined for its implicit rules and redefined as an exercise in pattern recognition and reproduction, even those seemingly most dependent on soulful improvisation
what we now confront is the possibility of machines transcending our definitions of mastery,pushing outward into an enormously expanded envelope of performance..t
so maybe the issue is.. with mastery and performance.. maybe those things aren’t the soul of a humanity.. and so.. they are able to be algo’d.. but don’t rep/define us
lee sedol: it’s not a human move. i’ve never seen a human play this move. so beautiful.
so too with flying.. et al.. doesn’t mean it’s more human.. means it’s augmenting a human/animal performance..
the ai player, *unbound by the structural limitations, the conventions of taste or the inherent prejudices of human play, explore fundamentally different pathways – and again, there’s an aesthetic component to the sheer otherness of its thought.. t
thinking *this is what we need ai ness to do.. to get us back to us.. as a means to listen to each of us.. everyday.. w/o agenda/judgment.. et al..
i don’t know what it will feel like to be human in that posthuman moment. i don’t think any of us truly do. any advent of an autonomous intelligence greater than our own can only be something like a divide-by-zero operation performed on all our ways of weighing the world, introducing a factor of infinity into a calculus that isn’t capable of containing it..t
i don’t know.. maybe it’s that ability to divide by zero that will blur our mathematical lines/assumptions of what it means to know things.. even what it means to be human
eagle and condor ness
Roger Schank (@rogerschank) tweeted at 6:43 AM – 10 Jan 2018 :
BBC News – CES 2018: When will AI deliver for humans? https://t.co/85gypk6ffP thanks for a truthful AI article @BBCRoryCJ (http://twitter.com/rogerschank/status/951087127688830976?s=17)
Clive Thompson (@pomeranian99) tweeted at 5:21 AM – 2 Mar 2018 :
Google created in-house training to teach staff machine learning; they’ve now turned it into publicly-available online tutorials — https://t.co/k7JNiJ3QOh(http://twitter.com/pomeranian99/status/969548191258546176?s=17)
via céline keller rt
Joanna J Bryson (@j2bryson) tweeted at 6:14 AM – 27 May 2018 :
@krustelkram There are simple & well established defs of intelligence, but while people confound it with „humanlike“ they ignore dictionaries. I’m working on a book, but meantime https://t.co/q2Yj92fZoE (http://twitter.com/j2bryson/status/1000711830162038785?s=17)
Johannes Klingebiel (@Klingebeil) tweeted at 6:16 AM – 27 May 2018 :
@krustelkram I always refer to this article, because it nicely sums up the problem with “AI” https://t.co/NtY6C208ct (http://twitter.com/Klingebeil/status/1000712267665694720?s=17)
no one is quite sure what the phrase even means.
So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.
Roger Schank (@rogerschank) tweeted at 5:21 AM – 25 Jul 2018 :
‘The discourse is unhinged’: how the media gets AI alarmingly wrong https://t.co/OlFLUaLQJ4 (http://twitter.com/rogerschank/status/1022079344016060417?s=17)
While the giddy hype around AI helped generate funding for researchers at universities and in the military, by the end of the 1960s it was becoming increasingly obvious to many AI pioneers that they had grossly underestimated the difficulty of simulating the human brain in machines. In 1969, Marvin Minsky, who had pronounced only eight years earlier that machines would surpass humans in general intelligence in his lifetime, co-authored a book with Seymour Papert proving that Rosenblatt’s perceptron could not do as much as the experts had once promised and was nowhere near as intelligent as the media had let on.
Minsky and Papert’s book suffused the research community with a contagious doubt that spread to other fields, leading the way for an outpouring of AI myth debunking. In 1972, the philosopher Hubert Dreyfus published an influential screed against thinking machines called What Computers Can’t Do, and a year later the British mathematician James Lighthill produced a report on the state of machine intelligence, which concluded that “in no part of the field have the discoveries made so far produced the major impact that was then promised”.
What Lipton finds most troubling, though, is not technical illiteracy among journalists, but how social media has allowed self-proclaimed “AI influencers” who do nothing more than paraphrase Elon Musk on their Medium blogs to cash in on this hype with low-quality, TED-style puff pieces.
“If you compare a journalist’s income to an AI researcher’s income,” she says, “it becomes pretty clear pretty quickly why it is impossible for journalists to produce the type of carefully thought through writing that researchers want done about their work.” She adds that while many researchers stand to benefit from hype, as a writer who wants to critically examine these technologies, she only suffers from it.
“we’re starting to see physiognomy and phrenology get a rerun in AI research”
-great @katecrawford lecture on bias and machine learning https://t.co/W10Q34s2XJ
Original Tweet: https://twitter.com/nathanjurgenson/status/1026467443588382721
22 min – machine learning making more and more decisions.. and biased..t
even deeper – we need to wake us all up.. detox us.. first.. ie: the data/people we’re worried about discriminating against.. aren’t themselves.. so data is not only mis algo’d.. it’s not-us to begin with
24 min – who’s idea of neutrality is at work here..t
doesn’t really matter if we’re looking at whales in sea world..
we have to go deeper..
30 min – what if a deeper harm than just bias..t
33 min – this confusion of categories..to treat social ways of being as though they are fixed objects.. real classificatory harm..t
not only that.. deeper.. again.. we’re labeling whales in sea world.. and assuming that’s their true nature
34 min – we have an ethical obligation to not do things that are scientifically questionable.. that could cause serious harm.. and further marginalize groups..
way deeper than if someone is gay..
38 min – one of biggest challenges of next decade.. the social implication of ai..t
in a good way.. if we go for augmenting interconnectedness
41 min – what if we asked.. what kind of world do we want.. and then.. what kind of tech could drive that..t
@hjarche of course its possible; but modern AI wants to do that by counting and pattern matching; to do it requires understanding how knowledge is acquired; we aren’t there yet
Original Tweet: https://twitter.com/rogerschank/status/1034181779106791424
Co.Design (@FastCoDesign) tweeted at 6:51 AM – 24 Sep 2018 :
The exploitation, injustice, and waste powering our AI https://t.co/5sl3UvFHGM (http://twitter.com/FastCoDesign/status/1044207682595491840?s=17)
It’s not just the miners: It’s also the humans operating the gigantic global shipping and manufacturing apparatus that brings each piece of the puzzle together, it’s the click-workers who label and sort vast data sets on which to train AI, and it’s you, the user, who is simultaneously acting as “a consumer, a resource, a worker, and a product,” as Crawford and Joler write in the essay. Through this lens, Echo’s complex processing becomes a story of human work and–more disturbingly–human exploitation. A child laborer in the mines of the Congo would need to work for 700,000 years without stopping to accumulate the kind of capital that Amazon CEO Jeff Bezos makes per day. “At every level contemporary technology is deeply rooted in and running on the exploitation of human bodies,” Crawford and Joler write in the essay.
let’s talk about AI because its non-existence poses so many nonsense questions https://t.co/j4ewP10MZl
Original Tweet: https://twitter.com/rogerschank/status/1052207120337244161
neotene (@ctrlcreep) tweeted at 1:23 PM – 13 Feb 2018 :
I’m a local maximum engineer! When artificial intelligences threaten humanity, I build little worlds that satisfy their utility functions, trapping them in programmed bliss, harmless cycles of hedonism (http://twitter.com/ctrlcreep/status/963509064146800640?s=17)
John Hagel (@jhagel) tweeted at 6:11 AM – 1 May 2019 :
Sociology professor Anton Oleinik argues that neural networks are structured in a way that limits the possibility that they will ever have true artificial creativity https://t.co/RTZhfkKjpy (http://twitter.com/jhagel/status/1123560458592505862?s=17)
ℳąhą Bąℓi, PhD مها بالي (@Bali_Maha) tweeted at 5:43 AM – 9 Jun 2019 :
This is among the *best* critique of AI/analytics I have ever read @14prinsp @KateMfD @Czernie @gsiemens https://t.co/K8tWYtK8Ug(http://twitter.com/Bali_Maha/status/1137686601662959621?s=17)
article by @danmcquillan Dan McQuillan
Machine learning extends bureaucracy into the future;
or rather, it bureaucratises a probabilistic future and actualises it in the present.
too much ness
A human-in-the-loop is not a humanistic pushback
as that human is themselves subsumed by the institution-in-the-loop.
They (people’s councils) are a collective questioning
of the decisions that define the way the machines will make decisions,
by applying critical pedagogy and situated knowledge.
They constitute a different subjectivity –
iterative deliberation of consensus, done right,
is an antidote to bureaucracy and to the calculative iterations of machine learning
public consensus always oppresses someone(s)..
We need to develop a different order of ordering.
Instead of ways of organising that allow everyone to evade responsibility,
we need to reclaim our own agency through self-organisation..t
We need to think collectively about ways out of this mess,
learning from and with each other rather than relying on machine learning.
countering thoughtlessness with practices of collective care.
We can’t uninvent either AI or bureaucracy,
but we can choose to radically change both our modes of organisation
and our approach to computational learning.