ai

wikipedia small

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study, whose goal is the creation of intelligence. Major AI researchers and textbooks define this field as “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as “the science and engineering of making intelligent machines”.

AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field’s long term goals. …

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—“can be so precisely described that a machine can be made to simulate it.” This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.
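a minimal sketch (mine, not wikipedia’s) of that ‘intelligent agent’ definition.. a loop that perceives its environment and takes the action most likely to succeed.. the thermostat world is hypothetical, just to make the definition concrete:

```python
# hypothetical toy world: an agent perceives a temperature and
# picks the action that best moves it toward its goal (20 degrees)

def perceive(environment):
    return environment["temperature"]

def act(percept, target=20):
    # choose the action expected to maximize the chance of success
    if percept < target:
        return "heat"
    if percept > target:
        return "cool"
    return "idle"

environment = {"temperature": 17}
for _ in range(4):
    action = act(perceive(environment))
    print(environment["temperature"], "->", action)
    # crude environment dynamics: each action shifts the temperature
    environment["temperature"] += {"heat": 1, "cool": -1, "idle": 0}[action]
```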

________

agi

wikipedia small

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as “strong AI”, “full AI” or as the ability to perform “general intelligent action”.

[..]

Artificial general intelligence research

Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action. The term was introduced by Mark Gubrud in 1997 in a discussion of the implications of fully automated military production and operations. … As yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in “The Singularity is Near” (i.e. between 2015 and 2045) is plausible. Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include Adaptive AI, the Machine Intelligence Research Institute, the OpenCog Foundation, Bitphase AI, TexAI, Numenta and the associated Redwood Neuroscience Institute, and AND Corporation.

Ray Kurzweil

Ada Lovelace

Alan Turing

Ben Goertzel

Roger Schank

open ai

ai for peace – Timo Honkela

________

first recollection of seeing this as something (humane).. when Bernd suggested i look into Monica’s work.

Maurice Conti

tech aug

_________

adding page while (still) reading through this article:

Jason Silva (@JasonSilva): “the ability to create new explanations is the unique, morally & intellectually significant functionality of people” aeon.co/magazine/techn…

aeon.co/magazine/technology/david-deutsch-artificial-intelligence
‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either).
Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery.
what if increased error is needed
Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software.
again… not human.. no?
Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans
Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general.
But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.

could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways.
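a minimal sketch (mine, not deutsch’s) of the probability-updating model of mind being criticized here.. bayes’ rule nudging a belief after each confirming observation.. the 0.8/0.3 likelihoods are made up:

```python
# bayes' rule: P(h | e) = P(e | h) P(h) / P(e)

def bayes_update(prior, p_e_if_true, p_e_if_false):
    """return the posterior probability of a hypothesis given one observation"""
    numerator = p_e_if_true * prior
    evidence = numerator + p_e_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5  # start undecided
for _ in range(5):  # five observations that fit the hypothesis
    belief = bayes_update(belief, p_e_if_true=0.8, p_e_if_false=0.3)
    print(round(belief, 3))  # belief climbs toward 1 with each 'reward'
```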

Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified, logic. Hence the term ‘general’ in AGI. A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.
Nowadays, an accelerating stream of marvellous and useful functionalities for computers are coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do.
The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up. Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.
This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.
AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.
_______

http://en.wikipedia.org/wiki/Simulacra_and_Simulation

posted on fb by jean russell
_________
@shimelfarb
Good look at how #AI is demolishing language barriers. #peacetech in the making.
begs we approach the limit of 7 bn idiosyncratic jargons.. new every day..
@PeaceTechLab

Long, but valuable read: everything you need to know about machine learning & #AI for 2017: ow.ly/afoG307DTsr via @nytimes

When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this.
[..]
If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.
[..]
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.
[..]
If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with *incredible granularity
*ginormous small ness.. like the potential of hosting-life-bits via self-talk as data.. aka: approaching the limit of 7 bn idiosyncratic jargons.. new every day..
become indigenous
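a minimal sketch (mine, not the nyt’s) of those ‘voters’.. one artificial neuron whose weighted connections learn a toy task (logical or) from examples rather than from rules:

```python
import random

def step(x):
    return 1 if x > 0 else 0

# toy data: learn logical OR from examples alone, no rules given
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.1  # learning rate

for epoch in range(100):
    for inputs, target in data:
        # the neuron fires if the weighted vote of its inputs crosses a threshold
        output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - output
        # learning = nudging the votes, not rewriting the rules
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([step(sum(w * x for w, x in zip(weights, inputs)) + bias)
       for inputs, _ in data])  # -> [0, 1, 1, 1]
```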
___________
from Roger:

@rogerschank

Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…

People learn from conversation and Google can’t have one.

what computers can’t do .. ai.. algo .. ness..

__________
i’d suggest ai as:

augmenting interconnectedness

__________

from Adam Greenfield’s radical techs:

4189

ch 9 – artificial intelligence – the eclipse of human discretion

4196

often the researchers involved have displayed a lack of curiosity for any form of intelligence beyond that they recognized in themselves.. and a marked lack of appreciation for the actual depth and variety of human talent.. t

project to develop ai has very often nurtured a special kind of stupidity in some of its most passionate supporters – a particular sort of arrogant ignorance that only afflicts those of high intellect..

as we reach teaching machines to think.. no longer thought of as ai.. which is progressively redefined as something perpetually out of reach

rt yesterday:

Andrew Ng (@AndrewYNg) tweeted at 12:04 PM – 1 May 2017 :

If you’re trying to understand AI’s near-term impact, don’t think “*sentience.” Instead think “automation on steroids.” (http://twitter.com/AndrewYNg/status/859106360662806529?s=17)

*Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).

4207

hoping.. there are some creative tasks technical systems will simply never be able to perform

the essence of learning, though, whether human or machinic, is developing the ability to detect, recognize and eventually reproduce patterns. and what poses problems for this line of argument (or hope, whichever it may be) is that many if not all of the greatest works of art – the things we regard as occupying the very pinnacle of human aspiration and achievement – consist of little other than patterns… rich/varied.. but nothing magical..

the humanist in me recoils at what seems like the brute-force reductionism of statements like this, but beyond some ghostly ‘inspiration’ it’s hard to distinguish what constitutes style other than habitual arrangements, whether those be palettes, chord progressions, or frequencies of word use and sentence structure. and these are just the sort of features that are ready-made for extraction via algorithm.
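a hypothetical sketch (mine, not greenfield’s) of that extraction-via-algorithm.. reducing ‘style’ to counted habitual arrangements, here just relative word frequencies and common word pairs:

```python
from collections import Counter
import re

def style_features(text, top=3):
    # a 'style fingerprint' as nothing but counted habits
    words = re.findall(r"[a-z']+", text.lower())
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    total = len(words)
    return {
        "favourite words": [(w, round(c / total, 3))
                            for w, c in unigrams.most_common(top)],
        "habitual pairs": bigrams.most_common(top),
    }

print(style_features("the ache and steel of life in her voice, the ache of it"))
```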

habitual arrangements.. chord progressions.. on composing/orchestration ness – ben folds composes in 10 min – orchestra knows what to play – plays in sync

[https://www.youtube.com/watch?v=BytUY_AwTUs]

everyone will have their own fav ies of an art that seems as if it must transcend reduction. for me, it’s the vocal phrasing of nina simone.. the ache and steel of life in her voice..

4218

the notion that everything i hear might be flattened to a series of instructions and executed by machine..

4230

‘to extract the features that make rembrandt rembrandt..’ for algo’d.. next rembrandt

4263

alphago isn’t just one thing, but a stack of multiple kinds of neural network and learning algorithm laminated together

4275

deep blue ..a special purpose engine exquisitely optimized for – and therefore completely useless at anything other than – the rules of chess.. alphago is a general learning machine..

4286

.. simply brute force. that may well have been how deep blue beat kasparov. it is not how alphago defeated lee sedol.. for many, i suspect, next rembrandt will feel like a more ominous development than alphago

4310

constructed bushido is unquestionably something that resides in the human heart, or does not… this matters when we describe a machine, however casually, as possessing this spirit.

4321

points toward a time when just about any human skill can be mined for its implicit rules and redefined as an exercise in pattern recognition and reproduction, even those seemingly most dependent on soulful improvisation

improv\e and algo ness

4332

what we now confront is the possibility of machines transcending our definitions of mastery, pushing outward into an enormously expanded envelope of performance..t

so maybe the issue is.. with mastery and performance.. maybe those things aren’t the soul of a humanity.. and so.. they are able to be algo’d.. but don’t rep/define us

lee sedol: it’s not a human move. i’ve never seen a human play this move. so beautiful.

so too with flying.. et al.. doesn’t mean it’s more human.. means it’s augmenting a human/animal performance..

the ai player, *unbound by the structural limitations, the conventions of taste or the inherent prejudices of human play, explores fundamentally different pathways – and again, there’s an aesthetic component to the sheer otherness of its thought.. t

thinking *this is what we need ai ness to do.. to get us back to us.. as a means to listen to each of us.. everyday.. w/o agenda/judgment.. et al..

4380

i don’t know what it will feel like to be human in that posthuman moment. i don’t think any of us truly do. any advent of an autonomous intelligence greater than our own can only be something like a divide-by-zero operation performed on all our ways of weighing the world, introducing a factor of infinity into a calculus that isn’t capable of containing it..t

i don’t know.. maybe it’s that ability to divide by zero that will blur our mathematical lines/assumptions of what it means to know things.. even what it means to be human

eagle and condor ness