Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study concerned with creating intelligence. Major AI researchers and textbooks define the field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues: some subfields focus on the solution of specific problems, others on one of several possible approaches, on the use of particular tools, or on particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long-term goals. …
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.
Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as “strong AI”, “full AI” or as the ability to perform “general intelligent action”.
Artificial general intelligence research
Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. The term was introduced by Mark Gubrud in 1997 in a discussion of the implications of fully automated military production and operations. … As yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in “The Singularity is Near” (i.e. between 2015 and 2045) is plausible. Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include Adaptive AI, the Machine Intelligence Research Institute, the OpenCog Foundation, Bitphase AI, TexAI, Numenta and the associated Redwood Neuroscience Institute, and AND Corporation.
ai for peace – Timo Honkela
adding page while (still) reading through this article:
Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values —the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways.
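the ‘assigning probabilities to ideas and modifying those probabilities in the light of experience’ that the passage criticizes can be sketched in a few lines.. a toy illustration only – the hypothesis and every number here are made up, not from the text:

```python
# Minimal sketch of Bayesian updating: an agent holds a probability
# for one idea and revises it after each observation via Bayes' rule.
# All likelihood values below are illustrative placeholders.

def bayes_update(p_h, p_obs_given_h, p_obs_given_not_h):
    """Return posterior P(idea | observation) for a binary hypothesis."""
    evidence = p_h * p_obs_given_h + (1 - p_h) * p_obs_given_not_h
    return p_h * p_obs_given_h / evidence

belief = 0.5  # prior: even odds the idea is true
# each observation: (P(obs | idea true), P(obs | idea false))
for lik_h, lik_not in [(0.9, 0.3), (0.8, 0.4), (0.7, 0.5)]:
    belief = bayes_update(belief, lik_h, lik_not)

print(round(belief, 3))  # → 0.894
```

deutsch’s point, of course, is that whatever this procedure is doing, it is not how new explanatory knowledge is created – the model can only reweight ideas it already has.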
@PeaceTechLab
When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this.[..]If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.[..]There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.[..]If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with *incredible granularity
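a toy version of that ‘bunch of widgets that either pass along a charge or don’t’.. one artificial neuron learning a pattern (logical OR, chosen only for illustration) from data rather than from rules – the learning rate and epoch count are arbitrary:

```python
# A single artificial neuron trained "from the ground up": it fires
# or it doesn't, and it adjusts the weights of its connections after
# each mistake (the classic perceptron rule). Task and constants are
# illustrative, not from the quoted article.

def fire(weights, bias, inputs):
    # pass a signal along only if the weighted input crosses threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# training data: logical OR, learned from examples rather than rules
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):  # nudge the connections after each error
    for inputs, target in data:
        error = target - fire(weights, bias, inputs)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([fire(weights, bias, x) for x, _ in data])  # → [0, 1, 1, 1]
```

what matters in the quoted picture isn’t this one neuron but the ‘manifold connections’ – millions of such units voting together – yet the fire-or-don’t mechanics are the same.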
Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…
People learn from conversation and Google can’t have one.
i’d suggest ai as:
ch 9 – artificial intelligence – the eclipse of human discretion
often the researchers involved have displayed a lack of curiosity for any form of intelligence beyond that they recognized in themselves.. and a marked lack of appreciation for the actual depth and variety of human talent. .. t
project to develop ai has very often nurtured a special kind of stupidity in some of its most passionate supporters – a particular sort of arrogant ignorance that only afflicts those of high intellect..
as we reach teaching machines to think.. no longer thought of as ai.. which is progressively redefined as something perpetually out of reach
Andrew Ng (@AndrewYNg) tweeted at 12:04 PM – 1 May 2017 :
If you’re trying to understand AI’s near-term impact, don’t think “*sentience.” Instead think “automation on steroids.” (http://twitter.com/AndrewYNg/status/859106360662806529?s=17)
*Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).
hoping.. there are some creative tasks technical systems will simply never be able to perform
the essence of learning, though, whether human or machinic, is developing the ability to detect, recognize and eventually reproduce patterns. and what poses problems for this line of argument (or hope, whichever it may be) is that many if not all of the greatest works of art – the things we regard as occupying the very pinnacle of human aspiration and achievement – consist of little other than patterns… rich/varied.. but nothing magical..
the humanist in me recoils at what seems like the brute-force reductionism of statements like this, but beyond some ghostly ‘inspiration’ it’s hard to distinguish what constitutes style other than habitual arrangements, whether those be palettes, chord progressions, or frequencies of word use and sentence structure. and these are just the sort of feature that are ready-made for extraction via algorithm.
habitual arrangements.. chord progressions.. on composing/orchestration ness – ben folds composes in 10 min – orchestra knows what to play – plays in sync
everyone will have their own fav ies of an art that seems as if it must transcend reduction. for me, it’s the oval phrasing of nina simone.. the ache and steel of life in her voice..
the notion that everything i hear might be flattened to a series of instructions and executed by machine..
‘to extract the features that make rembrandt rembrandt..’ for algo’d.. next rembrandt
alphago isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithm laminated together
deep blue ..a special purpose engine exquisitely optimized for – and therefore completely useless at anything other than – the rules of chess.. alphago is a general learning machine..
.. simply brute force. that may well have been how deep blue beat kasparov. it is not how alphago defeated lee sedol.. for many, i suspect, next rembrandt will feel like a more ominous development than alphago
constructed bushido is unquestionably something that resides in the human heart, or does not… this matters when we describe a machine, however casually, as possessing this spirit.
points toward a time when just about any human skill can be mined for its implicit rules and redefined as an exercise in pattern recognition and reproduction, even those seemingly most dependent on soulful improvisation
what we now confront is the possibility of machines transcending our definitions of mastery, pushing outward into an enormously expanded envelope of performance..t
so maybe the issue is.. with mastery and performance.. maybe those things aren’t the soul of a humanity.. and so.. they are able to be algo’d.. but don’t rep/define us
lee sedol: it’s not a human move. i’ve never seen a human play this move. so beautiful.
so too with flying.. et al.. doesn’t mean it’s more human.. means it’s augmenting a human/animal performance..
the ai player, *unbound by the structural limitations, the conventions of taste or the inherent prejudices of human play, explore fundamentally different pathways – and again, there’s an aesthetic component to the sheer otherness of its thought.. t
thinking *this is what we need ai ness to do.. to get us back to us.. as a means to listen to each of us.. everyday.. w/o agenda/judgment.. et al..
i don’t know what it will feel like to be human in that posthuman moment. i don’t think any of us truly do. any advent of an autonomous intelligence greater than our own can only be something like a divide-by-zero operation performed on all our ways of weighing the world, introducing a factor of infinity into a calculus that isn’t capable of containing it..t
i don’t know.. maybe it’s that ability to divide by zero that will blur our mathematical lines/assumptions of what it means to know things.. even what it means to be human
eagle and condor ness
Roger Schank (@rogerschank) tweeted at 6:43 AM – 10 Jan 2018 :
BBC News – CES 2018: When will AI deliver for humans? https://t.co/85gypk6ffP thanks for a truthful AI article @BBCRoryCJ (http://twitter.com/rogerschank/status/951087127688830976?s=17)
Clive Thompson (@pomeranian99) tweeted at 5:21 AM – 2 Mar 2018 :
Google created in-house training to teach staff machine learning; they’ve now turned it into publicly-available online tutorials — https://t.co/k7JNiJ3QOh(http://twitter.com/pomeranian99/status/969548191258546176?s=17)
via céline keller rt
Joanna J Bryson (@j2bryson) tweeted at 6:14 AM – 27 May 2018 :
@krustelkram There are simple & well established defs of intelligence, but while people confound it with „humanlike“ they ignore dictionaries. I’m working on a book, but meantime https://t.co/q2Yj92fZoE (http://twitter.com/j2bryson/status/1000711830162038785?s=17)
Johannes Klingebiel (@Klingebeil) tweeted at 6:16 AM – 27 May 2018 :
@krustelkram I always refer to this article, because it nicely sums up the problem with “AI” https://t.co/NtY6C208ct (http://twitter.com/Klingebeil/status/1000712267665694720?s=17)
no one is quite sure what the phrase even means.
So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.