what computers can’t do

book by Hubert Dreyfus

1972 book *What Computers Can’t Do*, revised first in 1979, and then again in 1992 with a new introduction as *What Computers Still Can’t Do*.

Hubert Dreyfus has been a critic of artificial intelligence research since the 1960s. In a series of papers and books, including *Alchemy and AI* (1965), *What Computers Can’t Do* (1972; 1979; 1992) and *Mind over Machine* (1986), he presented a pessimistic assessment of AI’s progress and a critique of the philosophical foundations of the field. Dreyfus’ objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2003), the standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.

Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills could never be captured in formal rules.

His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research which used high level formal symbols to represent reality and tried to reduce intelligence to symbol manipulation.

embodiment (process of)

When Dreyfus’ ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility. By the 1980s, however, many of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches now called “sub-symbolic” because they eschew early AI research’s emphasis on high level symbols. In the 21st century, statistics-based approaches to machine learning simulate the way that the brain uses unconscious instincts to perceive, notice anomalies and make quick judgements. These techniques are highly successful and are currently widely used in both industry and academia. Historian and AI researcher Daniel Crevier writes: “time has proven the accuracy and perceptiveness of some of Dreyfus’s comments.” Dreyfus said in 2007, “I figure I won and it’s over—they’ve given up.”

The grandiose promises of artificial intelligence

In Alchemy and AI (1965) and What Computers Can’t Do (1972), Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver (1957), predicted that by 1967:

A computer would be world champion in chess.

A computer would discover and prove an important new mathematical theorem.

Most theories in psychology would take the form of computer programs.

The press reported these predictions in glowing reports of the imminent arrival of machine intelligence.

Dreyfus felt that this optimism was totally unwarranted. He believed that these predictions were based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus’ position:

[A] great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.

These predictions were based on the success of an “information processing” model of the mind, articulated by Newell and Simon in their physical symbol systems hypothesis, and later expanded into a philosophical position known as computationalism by philosophers such as Jerry Fodor and Hilary Putnam. Because these researchers believed they had successfully simulated the essential process of human thought with simple programs, it seemed a short step to producing fully intelligent machines. However, Dreyfus argued that philosophy, especially 20th-century philosophy, had discovered serious problems with this information processing viewpoint. The mind, according to modern philosophy, is nothing like a computer.

Dreyfus’ four assumptions of artificial intelligence research

In Alchemy and AI and What Computers Can’t Do, Dreyfus identified four philosophical assumptions that supported the faith of early AI researchers that human intelligence depended on the manipulation of symbols. “In each case,” Dreyfus writes, “the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work.”

The biological assumption
The brain processes information in discrete operations by way of some biological equivalent of on/off switches.

In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to the way Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron. When digital computers became widely used in the early 1950s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components. To be fair, however, Daniel Crevier observes that “few still held that belief in the early 1970s, and nobody argued against Dreyfus” about the biological assumption.

The psychological assumption
The mind can be viewed as a device operating on bits of information according to formal rules.

He refuted this assumption by showing that much of what we “know” about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious background of commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus’ view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings.

The epistemological assumption
All knowledge can be formalized.

This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder John McCarthy has) that it was possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represented knowledge the same way. Dreyfus argued that there was no justification for this assumption, since so much of human knowledge was not symbolic.

The ontological assumption
The world consists of independent facts that can be represented by independent symbols.

Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The question of what exists is called ontology, and so Dreyfus calls this the ontological assumption. If this is false, then it raises doubts about what we can ultimately know and what intelligent machines will ultimately be able to help us to do.

Knowing-how vs. knowing-that: the primacy of intuition

In Mind Over Machine (1986), written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can’t Do, where he had made a similar argument criticizing the “cognitive simulation” school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s.

Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between “knowing-that” and “knowing-how”, based on Heidegger’s distinction of present-at-hand and ready-to-hand.

Knowing-that is our conscious, step-by-step problem solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls “knowing-that.”

Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply “size up the situation” and react.

The human sense of the situation, according to Dreyfus, is based on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge about the world. This “context” or “background” (related to Heidegger’s Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively in some way. It affects what we notice and what we don’t notice, what we expect and what possibilities we don’t consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our “fringe consciousness” (borrowing a phrase from William James): the millions of things we’re aware of, but we’re not really thinking about right now.

Dreyfus did not believe that AI programs, as they were implemented in the 1970s and 80s, could capture this “background” or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in “tree climbing with one’s eyes on the moon.”


Dreyfus began to formulate his critique in the early 1960s while he was a professor at MIT, then a hotbed of artificial intelligence research. His first publication on the subject was a half-page objection to a talk given by Herbert A. Simon in the spring of 1961. Dreyfus was especially bothered, as a philosopher, that AI researchers seemed to believe they were on the verge of solving many long-standing philosophical problems within a few years, using computers.

Alchemy and AI

In 1965, Dreyfus was hired (with his brother Stuart Dreyfus’ help) by Paul Armer to spend the summer at RAND Corporation’s Santa Monica facility, where he would write Alchemy and AI, the first salvo of his attack. Armer had thought he was hiring an impartial critic and was surprised when Dreyfus produced a scathing paper intended to demolish the foundations of the field. (Armer stated he was unaware of Dreyfus’ previous publication.) Armer delayed publishing it, but ultimately realized that “just because it came to a conclusion you didn’t like was no reason not to publish it.” It finally came out as a RAND memo and soon became a best seller.

The paper flatly ridiculed AI research, comparing it to alchemy: a misguided attempt to change metals to gold based on a theoretical foundation that was no more than mythology and wishful thinking. It ridiculed the grandiose predictions of leading AI researchers, predicting that there were limits beyond which AI would not progress and intimating that those limits would be reached soon.


The paper “caused an uproar”, according to Pamela McCorduck. The AI community’s response was derisive and personal. Seymour Papert dismissed one third of the paper as “gossip” and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing “politics” so that he could attach the prestigious RAND name to his ideas. Simon said, “what I resent about this was the RAND name attached to that garbage”.

Dreyfus, who taught at MIT, remembers that his colleagues working in AI “dared not be seen having lunch with me.” Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he recalls “I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being.”

The paper was the subject of a short piece in The New Yorker magazine on June 11, 1966. The piece mentioned Dreyfus’ contention that, while computers may be able to play checkers, no computer could yet play a decent game of chess. It reported with wry humor (as Dreyfus had) about the victory of a ten-year-old over the leading chess program, with “even more than its usual smugness.”

In hope of restoring AI’s reputation, Seymour Papert arranged a chess match between Dreyfus and Richard Greenblatt’s Mac Hack program. Dreyfus lost, much to Papert’s satisfaction. An Association for Computing Machinery bulletin used the headline:

“A Ten Year Old Can Beat the Machine—Dreyfus: But the Machine Can Beat Dreyfus”

Dreyfus complained in print that he hadn’t said a computer will never play chess, to which Herbert A. Simon replied: “You should recognize that some of those who are bitten by your sharp-toothed prose are likely, in their human weakness, to bite back … may I be so bold as to suggest that you could well begin the cooling—a recovery of your sense of humor being a good first step.”


By the early 1990s several of Dreyfus’ radical opinions had become mainstream.

Failed predictions. As Dreyfus had foreseen, the grandiose predictions of early AI researchers failed to come true. Fully intelligent machines (now known as “strong AI”) did not appear in the mid-1970s as predicted. HAL 9000 (whose capabilities for natural language, perception and problem solving were based on the advice and opinions of Marvin Minsky) did not appear in the year 2001. “AI researchers”, writes Nicolas Fearn, “clearly have some explaining to do.”[30] Today researchers are far more reluctant to make the kind of predictions that were made in the early days. (Although some futurists, such as Ray Kurzweil, are still given to the same kind of optimism.)

The biological assumption, although common in the forties and early fifties, was no longer assumed by most AI researchers by the time Dreyfus published What Computers Can’t Do. Although many (such as Ray Kurzweil or Jeff Hawkins) still argue that it is essential to reverse-engineer the brain by simulating the action of neurons, they don’t assume that neurons are essentially digital, but rather that the action of analog neurons can be simulated by digital machines to a reasonable level of accuracy. (Alan Turing had made this same observation as early as 1950.)

The psychological assumption and unconscious skills. Many AI researchers have come to agree that human reasoning does not consist primarily of high-level symbol manipulation. In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high level symbol manipulation or “GOFAI”, towards new models that are intended to capture more of our unconscious reasoning. Daniel Crevier writes that by 1993, unlike 1965, AI researchers “no longer made the psychological assumption”, and had continued forward without it.

In the 1980s, these new “sub-symbolic” approaches included:

  • Computational intelligence paradigms, such as neural nets and evolutionary algorithms, are mostly directed at simulating unconscious reasoning. Dreyfus himself agrees that these sub-symbolic methods can capture the kind of “tendencies” and “attitudes” that he considers essential for intelligence and expertise.[34]
  • Research into commonsense knowledge has focussed on reproducing the “background” or context of knowledge.
  • Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec’s paradox.) Brooks would spearhead a movement in the late 80s that took direct aim at the use of high-level symbols, called Nouvelle AI. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.

In the 1990s and the early decades of the 21st century, statistics-based approaches to machine learning used techniques related to economics and statistics to allow machines to “guess” – to make inexact, probabilistic decisions and predictions based on experience and learning. These programs simulate the way our unconscious instincts are able to perceive, notice anomalies and make quick judgements, similar to what Dreyfus called “sizing up the situation and reacting”, but here the “situation” consists of vast amounts of numerical data. These techniques are highly successful and are currently widely used in both industry and academia.

This research has gone forward without any direct connection to Dreyfus’ work.

Knowing-how and knowing-that. Research in psychology and economics has been able to show that Dreyfus’ (and Heidegger’s) speculation about the nature of human problem solving was essentially correct. Daniel Kahneman and Amos Tversky collected a vast amount of hard evidence that human beings use two very different methods to solve problems, which they named “System 1” and “System 2”. System 1, also known as the adaptive unconscious, is fast, intuitive and unconscious. System 2 is slow, logical and deliberate. Their research was collected in the book Thinking, Fast and Slow, and inspired Malcolm Gladwell’s popular book Blink. As with AI, this research was entirely independent of both Dreyfus and Heidegger.


Although AI research has clearly come to agree with Dreyfus, McCorduck claimed that “my impression is that this progress has taken place piecemeal and in response to tough given problems, and owes nothing to Dreyfus.”

The AI community, with a few exceptions, chose not to respond to Dreyfus directly. “He’s too silly to take seriously” a researcher told Pamela McCorduck. Marvin Minsky said of Dreyfus (and the other critiques coming from philosophy) that “they misunderstand, and should be ignored.” When Dreyfus expanded Alchemy and AI to book length and published it as What Computers Can’t Do in 1972, no one from the AI community chose to respond (with the exception of a few critical reviews). McCorduck asks “If Dreyfus is so wrong-headed, why haven’t the artificial intelligence people made more effort to contradict him?”

Part of the problem was the kind of philosophy that Dreyfus used in his critique. Dreyfus was an expert in modern European philosophers (like Heidegger and Merleau-Ponty). AI researchers of the 1960s, by contrast, based their understanding of the human mind on engineering principles and efficient problem solving techniques related to management science. On a fundamental level, they spoke a different language. Edward Feigenbaum complained, “What does he offer us? Phenomenology! That ball of fluff. That cotton candy!” In 1965, there was simply too huge a gap between European philosophy and artificial intelligence, a gap that has since been filled by cognitive science, connectionism and robotics research. It would take many years before artificial intelligence researchers were able to address the issues that were important to continental philosophy, such as situatedness, embodiment, perception and gestalt.

Another problem was that he claimed (or seemed to claim) that AI would never be able to capture the human ability to understand context, situation or purpose in the form of rules. But (as Peter Norvig and Stuart Russell would later explain), an argument of this form cannot be won: just because one cannot imagine formal rules that govern human intelligence and expertise, this does not mean that no such rules exist. They quote Alan Turing’s answer to all arguments similar to Dreyfus’:

“we cannot so easily convince ourselves of the absence of complete laws of behaviour … The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.'”

Dreyfus did not anticipate that AI researchers would realize their mistake and begin to work towards new solutions, moving away from the symbolic methods that Dreyfus criticized. In 1965, he did not imagine that such programs would one day be created, so he claimed AI was impossible. In 1965, AI researchers did not imagine that such programs were necessary, so they claimed AI was almost complete. Both were wrong.

A more serious issue was the impression that Dreyfus’ critique was incorrigibly hostile. McCorduck wrote, “His derisiveness has been so provoking that he has estranged anyone he might have enlightened. And that’s a pity.” Daniel Crevier stated that “time has proven the accuracy and perceptiveness of some of Dreyfus’s comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier.”


loc 26 (kindle)

the diff between the mathematical mind (esprit de géométrie) and the perceptive mind (esprit de finesse): the reason the mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement… these principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. we must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree.. mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous.. the mind.. does it tacitly, naturally, and without technical rules. – pascal pensees


loc 70 (preface) – by Anthony Oettinger (https://en.wikipedia.org/wiki/Anthony_Oettinger)

he puts in question the basic role that rules play in accepted ideas of what constitutes a satisfactory scientific explanation

loc 84 (still preface)

he is too modern to ask his questions from a viewpoint that assumes that man and mind are somehow set apart from the physical universe and therefore not within reach of science… curiously enough.. dreyfus’s own philosophical arguments lead him to see digital computers as limited not so much by being mindless as by having no body….. the central statement of this theme is that ‘a person experiences the objects of the world as already interrelated and full of meaning… there is no justification for the assumption that we first experience isolated facts/snapshots of facts or momentary views of snapshots of isolated facts and then give them significance… this is the point that contemporary philosophers such as heidegger and wittgenstein are trying to make.. this, dreyfus argues following merleau-ponty, is a consequence of our having bodies capable of an ongoing but unanalyzed mastery of their environment

loc 98 – acknowledgements – seymour papert

loc 113 –


Since the Greeks invented logic and geometry, the idea that all reasoning might be reduced to some kind of calculation so that all arguments could be settled once and for all has fascinated most of the Western tradition’s rigorous thinkers.

shaw communication law

Socrates was the first to give voice to this vision. The story of artificial intelligence might well begin around 450 B.C. when (according to Plato) Socrates demands of Euthyphro, a fellow Athenian who, in the name of piety, is about to turn in his own father for murder:

“I want to know what is characteristic of piety which makes all actions pious . . . that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men.”

Socrates is asking Euthyphro for what modern computer theorists would call an “effective procedure,”

“a set of rules which tells us, from moment to moment, precisely how to behave.”

Plato generalized this demand for moral certainty into an epistemological demand.

According to Plato, all knowledge must be stateable in explicit definitions which anyone could apply. If one could not state his know-how in terms of such explicit instructions, if his knowing how could not be converted into knowing that, then it was not knowledge but mere belief.

According to Plato, cooks, for example, who proceed by taste and intuition, and poets who work from inspiration, have no knowledge: what they do does not involve understanding and cannot be understood. More generally, what cannot be stated explicitly in precise instructions (all areas of human thought which require skill, intuition, or a sense of tradition) is relegated to some kind of arbitrary fumbling.

Thus Plato admits his instructions cannot be completely formalized. Similarly, a modern computer expert, Marvin Minsky, notes, after tentatively presenting a Platonic notion of effective procedure: “This attempt at definition is subject to the criticism that the interpretation of the rules is left to depend on some person or agent.”

aristotle: Yet it is not easy to find a formula by which we may determine how far and up to what point a man may go wrong before he incurs blame. But this difficulty of definition is inherent in every object of perception; such questions of degree are bound up with the circumstances of the individual case, where our only criterion is the perception.

The belief that such a total formalization of knowledge must be possible soon came to dominate Western thought.

Hobbes was the first to make explicit the syntactic conception of thought as calculation: “When a man reasons, he does nothing else but conceive a sum total from addition of parcels,” he wrote, “for REASON … is nothing but reckoning. . . .”

Leibniz, the inventor of the binary system, dedicated himself to working out the necessary unambiguous formal language. ..all knowledge could be expressed and brought together in one deductive system. On the basis of these numbers and the rules for their combination all problems could be solved and all controversies ended: 

In one of his “grant proposals” he explains how he could reduce all thought to the manipulation of numbers.

exact same time i’m reading this.. from Rob:

“we’re trying to do to writing and other language arts what we’ve already done to mathematics” medium.com/@hhschiaravall…

“We’re trying to turn something rich and interconnected into something discrete, objective and measurable.”

measuring things ness

fitting too with Benjamin‘s latest on ai and the new normal ..

If he had money enough and time, Leibniz remarks, he could carry it out… Leibniz had only promises, but in the work of George Boole, a mathematician and logician working in the early nineteenth century, his program came one step nearer to reality. Like Hobbes, Boole

supposed that reasoning was calculating,

and he set out to “investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a Calculus. . . .”” Boolean algebra is a binary algebra for representing elementary logical functions. If “a” and “fr” represent variables, “.” represents “and,” ” + ” represents “or,” and “1” and “0” represent “true” and “false” respectively, then the rules governing logical manipulation can be written in algebraic form as follows: (id theorems – additive, multiplicative et al)

Western man was now ready to begin the calculation. Almost immediately, in the designs of Charles Babbage (1835), practice began to catch up to theory. Babbage designed what he called an “Analytic Engine” which, though never built, was to function exactly like a modern digital computer, using punched cards, combining logical and arithmetic operations, and making logical decisions along the way based upon the results of its previous computations.

An important feature of Babbage’s machine was that it was digital. There are two fundamental types of computing machines: analogue and digital. Analogue computers do not compute in the strict sense of the word. They operate by measuring the magnitude of physical quantities. Using physical quantities, such as voltage, duration, angle of rotation of a disk, and so forth, proportional to the quantity to be manipulated, they combine these quantities in a physical way and measure the result. A slide rule is a typical analogue computer. A digital computer, as the word digit (Latin for “finger”) implies, represents all quantities by discrete states (for example, relays which are open or closed, a dial which can assume any one of ten positions, and so on) and then literally counts in order to get its result.

Thus, whereas analogue computers operate with continuous quantities, all digital computers are discrete state machines. As A. M. Turing, famous for defining the essence of a digital computer, puts it:

[Discrete state machines] move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machines which can profitably be thought of as being discrete state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.

Babbage’s ideas were too advanced for the technology of his time, for there was no quick efficient way to represent and manipulate the digits. He had to use awkward mechanical means, such as the position of cogwheels, to represent the discrete states. …. since a digital computer operates with abstract symbols which can stand for anything, and logical operations which can relate anything to anything, any digital computer (unlike an analogue computer) is a universal machine. First, as Turing puts it, it can simulate any other digital computer:

This special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent.

Second, and philosophically more significant, any process which can be formalized so that it can be represented as series of instructions for the manipulation of discrete elements, can, at least in principle, be reproduced by such a machine. Thus even an analogue computer, provided that the relation of its input to its output can be described by a precise mathematical function, can be simulated on a digital machine
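Turing’s “convenient fiction” is easy to make concrete. A minimal sketch of a discrete state machine, using his own lighting-switch example (the state names and transition table here are invented for illustration):

```python
# A discrete state machine as a transition table: the machine jumps
# between definite states; intermediate positions are ignored by fiat.
transitions = {
    ("off", "toggle"): "on",
    ("on",  "toggle"): "off",
}

def run(state, inputs):
    """Step the machine through a sequence of inputs, one click at a time."""
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

print(run("off", ["toggle", "toggle", "toggle"]))  # a switch clicked 3 times: "on"
```

Any process that can be written as such a table of discrete states and transitions can, per the passage above, be reproduced on one suitably programmed universal machine.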

Moreover, the rules were built into the circuits of the machine. Once the machine was programmed there was no need for interpretation; no appeal to human intuition and judgment.

This was just what Hobbes and Leibniz had ordered, and Martin Heidegger appropriately saw in cybernetics the culmination of the philosophical tradition. 15 *

Turing had grasped the possibility and provided the criterion for success, but his article ended with only the sketchiest suggestions about what to do next: We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision… (1) the playing of chess would be best… (2) understand and speak English. This process could follow the normal teaching of a child… Again I do not know what the right answer is, but I think both approaches should be tried.

A technique was still needed for finding the rules which thinkers from Plato to Turing assumed must exist: a technique for converting any practical activity, such as playing chess or learning a language, into the set of instructions Leibniz called a theory.

What was needed were rules for converting any sort of intelligent activity into a set of instructions. At this point Herbert Simon and Allen Newell, analyzing the way a student proceeded to solve logic problems, observed that their subjects relied on rules of thumb rather than on so-called algorithmic programs, which follow an exhaustive method to arrive at a solution but which rapidly become unwieldy when dealing with practical problems. This notion of a rule of practice provided a breakthrough.

But Newell and Simon soon realized that even this approach was not general enough.

All these kinds of information are heuristics: things that aid discovery. Heuristics seldom provide infallible guidance… Often they “work,” but the results are variable and success is seldom guaranteed.
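a toy sketch of the contrast (mine, not Newell and Simon's code): an exhaustive algorithmic search is guaranteed but blows up on large problems; a heuristic shortcut is fast but, exactly as the quote says, "success is seldom guaranteed"..

```python
# Toy illustration: exhaustive (algorithmic) search vs a heuristic
# hill-climb. The heuristic only looks at neighbours, so it is cheap --
# and can get stuck on a local peak.

def exhaustive_max(f, domain):
    """Algorithmic: try every candidate; guaranteed to find the best."""
    return max(domain, key=f)

def hill_climb_max(f, domain, start):
    """Heuristic: keep moving to a better neighbour; may stop at a local peak."""
    x = start
    while True:
        neighbours = [n for n in (x - 1, x + 1) if n in domain]
        best = max(neighbours, key=f, default=x)
        if f(best) <= f(x):
            return x
        x = best

# A landscape with a small local peak at 2 and the true peak at 8.
f = {0: 0, 1: 3, 2: 5, 3: 2, 4: 1, 5: 4, 6: 7, 7: 9, 8: 10, 9: 6}.get
domain = range(10)

print(exhaustive_max(f, domain))     # 8 -- always correct, but tries everything
print(hill_climb_max(f, domain, 0))  # 2 -- fast, but trapped on a local peak
```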

It seemed that finally philosophical ambition had found the necessary technology: that the universal, high-speed computer had been given the rules for converting reasoning into reckoning. Simon and Newell sensed the importance of the moment and jubilantly announced that the era of intelligent machines was at hand… In short, we now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving; and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers. Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed computer can be programmed to exhibit them also. 28 This field of research, dedicated to using digital computers to simulate intelligent behavior, soon came to be known as “artificial intelligence.”

But the term “artificial” does not mean that workers in artificial intelligence are trying to build an artificial man. ..Likewise, the term “intelligence” can be misleading. No one expects the resulting robot to reproduce everything that counts as intelligent behavior in human beings.

back then..

These last metaphysicians are staking everything on man’s ability to formalize his behavior; to bypass brain and body, and arrive, all the more surely, at the essence of rationality.

Computers have already brought about a technological revolution comparable to the Industrial Revolution.

we are so near the events that it is difficult to discern their significance. This much, however, is clear. Aristotle defined man as a rational animal, and since then reason has been held to be of the essence of man. ..if reason can be programmed into a computer, this will confirm an understanding of the nature of man, which Western thinkers have been groping toward for two thousand years but which they only now have the tools to express and implement. The incarnation of this intuition will drastically change our understanding of ourselves. If, on the other hand, artificial intelligence should turn out to be impossible, then we will have to distinguish human from artificial reason, and this too will radically change our view of ourselves. ..


The need for a critique of artificial reason is a special case of a general need for critical caution in the behavioral sciences. Chomsky remarks that in these sciences “there has been a natural but unfortunate tendency to ‘extrapolate,’ from the thimbleful of knowledge that has been attained in careful experimental work and rigorous data-processing, to issues of much wider significance and of great social concern.” He concludes that the experts have the responsibility of making clear the actual limits of their understanding and of the results they have so far achieved. A careful analysis of these limits will demonstrate that in virtually every domain of the social and behavioral sciences the results achieved to date will not support such “extrapolation.”29 Artificial intelligence, at first glance, seems to be a happy exception to this pessimistic principle.

Thinkers in both camps have failed to ask the preliminary question whether machines can in fact exhibit even elementary skills like playing games, solving simple problems, reading simple sentences and recognizing patterns, presumably because they are under the impression, fostered by the press and artificial-intelligence researchers such as Minsky, that the simple tasks and even some of the most difficult ones have already been or are about to be accomplished. To begin with, then, these claims must be examined.

Unfortunately, the tenth anniversary of this historic talk went unnoticed, and workers in artificial intelligence did not, at any of their many national and international meetings, take time out from their progress reports to confront these predictions with the actual achievements. Now fourteen years have passed, and we are being warned that it may soon be difficult to control our robots. It is certainly high time to measure this original prophecy against reality.

It is essential to be aware at the outset that despite predictions, press releases, films, and warnings, artificial intelligence is a promise and not an accomplished fact. Only then can we begin our examination of the actual state and future hopes of artificial intelligence at a sufficiently rudimentary level.

The field of artificial intelligence has many divisions and subdivisions, but the most important work can be classified into four areas: game playing, language translating, problem solving, and pattern recognition

If the order of argument presented above and the tone of my opening remarks seem strangely polemical for an effort in philosophical analysis, I can only point out that, as we have already seen, artificial intelligence is a field in which the rhetorical presentation of results often substitutes for research, so that research papers resemble more a debater’s brief than a scientific report. Such persuasive marshaling of facts can only be answered in kind. Thus the accusatory tone of Part I. In Part II, however, I have tried to remain as objective as possible in testing fundamental assumptions, although I know from experience that challenging these assumptions will produce reactions similar to those of an insecure believer when his faith is challenged.

However, in anticipation of the impending outrage I want to make absolutely clear from the outset that what I am criticizing is the implicit and explicit philosophical assumptions of Simon and Minsky and their co-workers, not their technical work.

An artifact could replace men in some tasks, for example, those involved in exploring planets, without performing the way human beings would and without exhibiting human flexibility. Research in this area is not wasted or foolish, although a balanced view of what can and cannot be expected of such an artifact would certainly be aided by a little philosophical perspective.

part 1


p 45 (pdf)

Phase I (1957-1962): Cognitive Simulation

I. Analysis of Work in Language Translation, Problem Solving, and Pattern Recognition

p 3

Anthony Oettinger, the first to produce a mechanical dictionary (1954), recalls the climate of these early days: “The notion of . . . fully automatic high quality mechanical translation, planted by overzealous propagandists for automatic translation on both sides of the Iron Curtain and nurtured by the wishful thinking of potential users, blossomed like a vigorous weed.” This initial enthusiasm and the subsequent disillusionment provide a sort of paradigm for the field.

p 4

It was not sufficiently realized that the gap between such output . . . and high quality translation proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones, whereas the “few” remaining problems were the harder ones, very hard indeed.

p 5

Much of the early work in the general area of artificial intelligence, especially work on game playing and problem solving, was inspired and dominated by the work of Newell, Shaw, and Simon at the RAND Corporation and at Carnegie Institute of Technology. 7 Their approach is called Cognitive Simulation (CS) because the technique generally employed is to collect protocols from human subjects, which are then analyzed to discover the heuristics these subjects employ. 8 * A program is then written which incorporates similar rules of thumb.

p 6

Soon, however, Simon gave way to more enthusiastic claims:

Subsequent work has tended to confirm [our] initial hunch, and to demonstrate that heuristics, or rules of thumb, form the integral core of human problem-solving processes. As we begin to understand the nature of the heuristics that people use in thinking the mystery begins to dissolve from such (heretofore) vaguely understood processes as “intuition” and “judgment.” 11

But, as we have seen in the case of language translating, difficulties have an annoying way of reasserting themselves. This time, the “mystery” of judgment reappears in terms of the organizational aspect of the problem solving programs. Already in 1961, at the height of Simon’s enthusiasm, Minsky saw the difficulties which would attend the application of trial-and-error techniques to really complex problems:

The simplest problems, e.g., playing tic-tac-toe or proving the very simplest theorems of logic, can be solved by simple recursive application of all the available transformations to all the situations that occur, dealing with sub-problems in the order of their generation. This becomes impractical in more complex problems as the search space grows larger and each trial becomes more expensive in time and effort. One can no longer afford a policy of simply leaving one unsuccessful attempt to go on to another.

This, Minsky claims, shows the need for a planning program, but as he goes on to point out: Planning methods . . . threaten to collapse when the fixed sets of categories adequate for simple problems have to be replaced by the expressions of descriptive language.
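the arithmetic behind Minsky's worry, as a toy sketch (mine, not from the book): a blind exhaustive search must examine on the order of b**d positions for branching factor b and depth d, which is why "simple recursive application of all the available transformations" stops being practical almost immediately..

```python
# Toy arithmetic: the exponential growth of an exhaustive search tree.

def tree_size(branching, depth):
    """Number of leaf positions a blind exhaustive search must examine."""
    return branching ** depth

print(tree_size(3, 4))    # a tiny toy game: 81 leaves, trivially exhausted
print(tree_size(30, 10))  # chess-like toy numbers: ~5.9e14, hopeless to enumerate
```

no increase in raw speed closes that gap; hence the appeal to planning and heuristics.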

p 8

Public admission that GPS was a dead end, however, did not come until much later.

p 12


II. The Underlying Significance of Failure to Achieve Predicted Results

Negative results, provided one recognizes them as such, can be interesting. Diminishing achievement, instead of the predicted accelerating success, perhaps indicates some unexpected phenomenon. Perhaps we are pushing out on a continuum like that of velocity, where further acceleration costs more and more energy as we approach the speed of light, or perhaps we are instead facing a discontinuity, which requires not greater effort but entirely different techniques, as in the case of the tree-climbing man who tries to reach the moon.

drucker right thing law

p 19

We have seen that Bar-Hillel and Oettinger, two of the most respected and best-informed workers in the field of automatic language translation, agree in their pessimistic conclusions concerning the possibility of further progress in the field. Each has realized that in order to translate a natural language, more is needed than a mechanical dictionary, no matter how complete, and the laws of grammar, no matter how sophisticated.

p 20

Pascal already noted that the perceptive mind functions “tacitly, naturally, and without technical rules.” Wittgenstein has spelled out this insight in the case of language.

We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real “definition” to them.

To suppose that there must be would be like supposing that whenever children play with a ball they play a game according to strict rules. 43 *

A natural language is used by people involved in situations in which they are pursuing certain goals. These extralinguistic goals, which need not themselves be precisely stated or statable, provide some of the cues which reduce the ambiguity of expressions as much as is necessary for the task at hand. 

idio jargon ness

p 22

What is involved in learning a language is much more complicated and more mysterious than the sort of conditioned reflex involved in learning to associate nonsense syllables.

Wittgenstein suggests that the child must be engaged in a “form of life” in which he shares at least some of the goals and interests of the teacher, so that the activity at hand helps to delimit the possible reference of the words used. What, then, can be taught to a machine? This is precisely what is in question in one of the few serious objections to work in artificial intelligence made by one of the workers himself.

p 23

Data are indeed put into a machine, but in an entirely different way than children are taught.

well.. naturally taught.. we do school much like you’d put into a machine

p 40

Summary. Human beings are able to recognize patterns under the following increasingly difficult conditions:

1. The pattern may be skewed, incomplete, deformed, and embedded in noise;

2. The traits required for recognition may be “so fine and so numerous” that, even if they could be formalized, a search through a branching list of such traits would soon become unmanageable as new patterns for discrimination were added;

3. The traits may depend upon external and internal context and are thus not amenable to context-free specification;

4. There may be no common traits but a “complicated network of overlapping similarities,” capable of assimilating ever new variations.

Any system which can equal human performance must, therefore, be able to:

1. Distinguish the essential from the inessential features of a particular instance of a pattern;

2. Use cues which remain on the fringes of consciousness;

3. Take account of the context;

4. Perceive the individual as typical, i.e., situate the individual with respect to a paradigm case.
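a toy sketch of condition 1 above (mine, not from the book): a recognizer defined by an explicit, context-free trait list is brittle; as soon as an instance is incomplete or embedded in noise it matches nothing, whereas looser "family resemblance" matching by overlap degrades more gracefully..

```python
# Toy illustration: strict trait-list recognition vs overlap-based matching.

PATTERNS = {
    "A": {"apex", "two_legs", "crossbar"},
    "H": {"two_verticals", "crossbar"},
}

def strict_match(traits):
    """All listed traits must be present -- brittle under deformation."""
    return [name for name, req in PATTERNS.items() if req <= traits]

def similarity_match(traits):
    """Pick the pattern sharing the most traits -- tolerates missing ones."""
    return max(PATTERNS, key=lambda name: len(PATTERNS[name] & traits))

noisy_a = {"apex", "crossbar", "smudge"}  # incomplete, noisy instance of A
print(strict_match(noisy_a))      # [] -- the strict trait list recognizes nothing
print(similarity_match(noisy_a))  # 'A' -- overlap-based matching still works
```

even the looser version dodges rather than answers conditions 2-4: the traits are still explicit, context-free, and enumerated in advance.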

p 41


The basic problem facing workers attempting to use computers in the simulation of human intelligent behavior should now be clear: all alternatives must be made explicit.

Instead of these triumphs, an overall pattern has emerged: success with simple mechanical forms of information processing, great expectations, and then failure when confronted with more complicated forms of behavior.


Phase II (1962-1967) Semantic Information Processing

p 44

ANALYSIS OF A PROGRAM WHICH “UNDERSTANDS ENGLISH”: BOBROW’S STUDENT

Of the five semantic information processing programs collected in Minsky’s book, Daniel Bobrow’s STUDENT, a program for solving algebra word problems, is put forward as the most successful. It is, Minsky tells us, “a demonstration par excellence of the power of using meaning to solve linguistic problems.” 7 Indeed, Minsky devotes a great deal of his Scientific American article to Bobrow’s program and goes so far as to say that “it understands English.” 8


p 46

Choosing algebra problems also has another advantage: In natural language, the ambiguities arise not only from the variety of structural groupings the words could be given, but also from the variety of meanings that can be assigned to each individual word. In STUDENT the strong semantic constraint (that the sentences express algebraic relations between the designated entities) keeps the situation more or less under control.
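a toy reconstruction of the trick (mine, far cruder than Bobrow's STUDENT): because every sentence is assumed in advance to express an algebraic relation, a few surface patterns suffice to turn words into an equation, and no wider understanding of English is involved..

```python
import re

# Toy sketch: word-problem "understanding" as pattern matching under a
# strong semantic constraint. Handles only one sentence shape.

def solve_word_problem(text):
    """Handle only: 'If X is N more than Y, and Y is M, what is X?'"""
    m = re.match(
        r"If (\w+) is (\d+) more than (\w+), and \3 is (\d+), what is \1\?",
        text,
    )
    if not m:
        raise ValueError("outside the program's tiny pattern vocabulary")
    x, n, _, y_value = m.groups()
    return x, int(n) + int(y_value)  # x = y + n

print(solve_word_problem("If bill is 3 more than mary, and mary is 5, what is bill?"))
# ('bill', 8)
```

rephrase the question even slightly and the match fails, which is exactly the kind of restriction the passage is pointing at.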

What, then, has been achieved?

p 48

Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin. As the machine improves both itself and its model of itself, we shall begin to see all the phenomena associated with the terms “consciousness,” “intuition” and “intelligence” itself. It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same. 22

It is not as hard to say how close we are to this threshold as Minsky would like us to believe.

Why Bobrow and Minsky think, in the face of the peculiar restrictions necessary to the function of the program, that such a generalization must be possible is hard to understand. Nothing, I think, can justify or even explain their optimism concerning this admittedly limited and ad hoc approach.

p 55

Quillian proposes this program as “a reasonable view of how semantic information is organized within a person’s memory.” 41 He gives no argument to show that it is reasonable except that if a computer were to store semantic information, this would be a reasonable model for it. People, indeed, are not aware of going through any of the complex storage and retrieval process Quillian outlines, but this does not disturb Quillian, who, like his teacher, Simon, in similar trouble can always claim that these processes are nonetheless unconsciously taking place:

p 56

Quillian seems to have inherited Newell and Simon’s unquestioned assumption that human beings operate by heuristic programs.

But with a frankness, rare in the literature, Quillian also reports his disappointments:

p 57

Unfortunately, two years of work on this problem led to the conclusion that the task is much too difficult to execute at our present stage of knowledge. The processing that goes on in a person’s head when he “understands” a sentence and incorporates its meaning into his memory is very large indeed, practically all of it being done without his conscious knowledge. 4

These difficulties suggest that the model itself, the idea that our understanding of a natural language involves building up a structured whole out of an enormous number of explicit parts, may well be mistaken. Quillian’s work raises rather than resolves the question of storing the gigantic number of facts resulting from an analysis which has no place for perceptual gestalts. If this data structure grows too rapidly with the addition of new definitions, then Quillian’s work, far from being encouraging, would be a reductio ad absurdum of the whole computer-oriented approach.

What would be reasonable to expect? Minsky estimates that Quillian’s program now contains a few hundred facts. He estimates that “a million facts would be necessary for great intelligence.” 49
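a toy sketch of the kind of model at issue (mine, not Quillian's program): concepts as nodes, facts as links, and "meaning" found by spreading out from two concepts until their neighbourhoods intersect. the passage's worry is how any such explicit store scales toward Minsky's "million facts"..

```python
from collections import deque

# Toy semantic network: every fact is an explicit link between concepts.

LINKS = {
    "canary": {"bird"},
    "bird": {"animal", "wings"},
    "fish": {"animal", "fins"},
    "animal": {"living"},
}

def neighbourhood(start):
    """All concepts reachable from start by following links outward."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in LINKS.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def common_concepts(a, b):
    """The intersection a Quillian-style spreading search looks for."""
    return neighbourhood(a) & neighbourhood(b)

print(common_concepts("canary", "fish"))  # {'animal', 'living'}
```

every relation here had to be typed in by hand, which is the point: nothing in the model suggests how the store grows gracefully as definitions multiply.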

p 58

Minsky claims in another of his books, that within a generation . . . few compartments of intellect will remain outside the machine’s realm; the problem of creating “artificial intelligence” will be substantially solved. 51 Certainly there is nothing in Semantic Information Processing to justify this confidence. As we have seen, Minsky criticizes the early programs for their lack of generality. “Each program worked only on its restricted specialty, and there was no way to combine two different problem-solvers.” 52 But Minsky’s solutions are as ad hoc as ever. Yet he adds jauntily: The programs described in this volume may still have this character, but they are no longer ignoring the problem. In fact, their chief concern is finding methods of solving it. 53

the real problem, the problem of how to structure and store the mass of data required, has been put aside.

or is it that you can’t even collect enough/right data for thinking..

one could however.. perhaps.. collect enough data for connecting (hlb that io dance).. but not for thinking – that embodiment/indigenous piece.. that a machine can’t have/be

p 63

Why do those working in Artificial Intelligence assume that there must be a digital way of performing human tasks? To my knowledge, no one in the field seems to have asked himself these questions. In fact, artificial intelligence is the least self-critical field on the scientific scene. There must be a reason why these intelligent men almost unanimously minimize or fail to recognize their difficulties, and continue dogmatically to assert their faith in progress. Some force in their assumptions, clearly not their success, must allow them to ignore the need for justification. We must now try to discover why, in the face of increasing difficulties, workers in artificial intelligence show such untroubled confidence.

part 2


ch 3


from Roger:


Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…

People learn from conversation and Google can’t have one.

what computers can’t do .. ai.. algo .. ness..