what computers can’t do

what computers can’t do: a critique of artificial reason

book (1972) by Hubert Dreyfus [adding notes/quotes as i re read – loc as page .. out of 5570 – start adding after wikipedia? quotes below]
1972 book *What Computers Can’t Do*, revised first in 1979, and then again in 1992 with a new introduction as *What Computers Still Can’t Do*.

Hubert Dreyfus has been a critic of artificial intelligence research since the 1960s. In a series of papers and books, including Alchemy and AI (1965), What Computers Can’t Do (1972; 1979; 1992) and Mind over Machine (1986), he presented a pessimistic assessment of AI’s progress and a critique of the philosophical foundations of the field. Dreyfus’ objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2003), the standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.

Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills could never be captured in formal rules.

His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research which used high level formal symbols to represent reality and tried to reduce intelligence to symbol manipulation.

embodiment (process of)

When Dreyfus’ ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility. By the 1980s, however, many of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches now called “sub-symbolic” because they eschew early AI research’s emphasis on high level symbols. In the 21st century, statistics-based approaches to machine learning simulate the way that the brain uses unconscious instincts to perceive, notice anomalies and make quick judgements. These techniques are highly successful and are currently widely used in both industry and academia. Historian and AI researcher Daniel Crevier writes: “time has proven the accuracy and perceptiveness of some of Dreyfus’s comments.” Dreyfus said in 2007, “I figure I won and it’s over—they’ve given up.”

The grandiose promises of artificial intelligence

In Alchemy and AI (1965) and What Computers Can’t Do (1972), Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver (1957), predicted that by 1967:

A computer would be world champion in chess.

A computer would discover and prove an important new mathematical theorem.

Most theories in psychology would take the form of computer programs.

The press reported these predictions in glowing reports of the imminent arrival of machine intelligence.

Dreyfus felt that this optimism was totally unwarranted. He believed that these predictions were based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus’ position:

[A] great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.

These predictions were based on the success of an “information processing” model of the mind, articulated by Newell and Simon in their physical symbol systems hypothesis, and later expanded into a philosophical position known as computationalism by philosophers such as Jerry Fodor and Hilary Putnam. Believing that they had successfully simulated the essential process of human thought with simple programs, researchers took it to be a short step to producing fully intelligent machines. However, Dreyfus argued that philosophy, especially 20th-century philosophy, had discovered serious problems with this information processing viewpoint. The mind, according to modern philosophy, is nothing like a computer.

Dreyfus’ four assumptions of artificial intelligence research

In Alchemy and AI and What Computers Can’t Do, Dreyfus identified four philosophical assumptions that supported the faith of early AI researchers that human intelligence depended on the manipulation of symbols. “In each case,” Dreyfus writes, “the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work.”

The biological assumption
The brain processes information in discrete operations by way of some biological equivalent of on/off switches.

In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similarly to Boolean logic gates, and so could be imitated by electronic circuitry at the level of the neuron. When digital computers became widely used in the early 50s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components. To be fair, however, Daniel Crevier observes that “few still held that belief in the early 1970s, and nobody argued against Dreyfus” about the biological assumption.
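A minimal sketch (an illustration added here, not taken from Dreyfus or the early researchers) of the McCulloch–Pitts idea that an all-or-nothing neuron can imitate a logic gate; the weights and thresholds are assumptions chosen for the example:

```python
# McCulloch-Pitts style threshold unit: it fires (1) only when the
# weighted sum of its binary inputs reaches the threshold --
# all-or-nothing, like the idealized neuron.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable thresholds the same unit behaves as Boolean gates:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```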

The psychological assumption
The mind can be viewed as a device operating on bits of information according to formal rules.

He refuted this assumption by showing that much of what we “know” about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious background of commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus’ view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings.

The epistemological assumption
All knowledge can be formalized.

This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder John McCarthy has) that it was possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represented knowledge the same way. Dreyfus argued that there was no justification for this assumption, since so much of human knowledge was not symbolic.

The ontological assumption
The world consists of independent facts that can be represented by independent symbols.

Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The question of what exists is called ontology, and so Dreyfus calls this the ontological assumption. If this is false, then it raises doubts about what we can ultimately know and what intelligent machines will ultimately be able to help us to do.

Knowing-how vs. knowing-that: the primacy of intuition

In Mind Over Machine (1986), written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can’t Do, where he had made a similar argument criticizing the “cognitive simulation” school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s.

Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between “knowing-that” and “knowing-how”, based on Heidegger’s distinction of present-at-hand and ready-to-hand.

Knowing-that is our conscious, step-by-step problem solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls “knowing-that.”

Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply “size up the situation” and react.

The human sense of the situation, according to Dreyfus, is based on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge about the world. This “context” or “background” (related to Heidegger’s Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively in some way. It affects what we notice and what we don’t notice, what we expect and what possibilities we don’t consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our “fringe consciousness” (borrowing a phrase from William James): the millions of things we’re aware of, but we’re not really thinking about right now.

Dreyfus did not believe that AI programs, as they were implemented in the 70s and 80s, could capture this “background” or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in “tree climbing with one’s eyes on the moon.”

History

Dreyfus began to formulate his critique in the early 1960s while he was a professor at MIT, then a hotbed of artificial intelligence research. His first publication on the subject is a half-page objection to a talk given by Herbert A. Simon in the spring of 1961. Dreyfus was especially bothered, as a philosopher, that AI researchers seemed to believe they were on the verge of solving many long standing philosophical problems within a few years, using computers.

Alchemy and AI

In 1965, Dreyfus was hired (with his brother Stuart Dreyfus’ help) by Paul Armer to spend the summer at RAND Corporation’s Santa Monica facility, where he would write Alchemy and AI, the first salvo of his attack. Armer had thought he was hiring an impartial critic and was surprised when Dreyfus produced a scathing paper intended to demolish the foundations of the field. (Armer stated he was unaware of Dreyfus’ previous publication.) Armer delayed publishing it, but ultimately realized that “just because it came to a conclusion you didn’t like was no reason not to publish it.” It finally came out as a RAND memo and soon became a best seller.

The paper flatly ridiculed AI research, comparing it to alchemy: a misguided attempt to change metals to gold based on a theoretical foundation that was no more than mythology and wishful thinking. It ridiculed the grandiose predictions of leading AI researchers, predicting that there were limits beyond which AI would not progress and intimating that those limits would be reached soon.

Reaction

The paper “caused an uproar”, according to Pamela McCorduck. The AI community’s response was derisive and personal. Seymour Papert dismissed one third of the paper as “gossip” and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing “politics” so that he could attach the prestigious RAND name to his ideas. Simon said, “what I resent about this was the RAND name attached to that garbage”.

Dreyfus, who taught at MIT, remembers that his colleagues working in AI “dared not be seen having lunch with me.” Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he recalls “I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being.”

The paper was the subject of a short piece in The New Yorker magazine on June 11, 1966. The piece mentioned Dreyfus’ contention that, while computers may be able to play checkers, no computer could yet play a decent game of chess. It reported with wry humor (as Dreyfus had) the victory of a ten-year-old over the leading chess program, with “even more than its usual smugness.”

In hope of restoring AI’s reputation, Seymour Papert arranged a chess match between Dreyfus and Richard Greenblatt’s Mac Hack program. Dreyfus lost, much to Papert’s satisfaction. An Association for Computing Machinery bulletin used the headline:

“A Ten Year Old Can Beat the Machine—Dreyfus: But the Machine Can Beat Dreyfus”[28]

Dreyfus complained in print that he hadn’t said a computer will never play chess, to which Herbert A. Simon replied: “You should recognize that some of those who are bitten by your sharp-toothed prose are likely, in their human weakness, to bite back … may I be so bold as to suggest that you could well begin the cooling—a recovery of your sense of humor being a good first step.”

Vindicated

By the early 1990s several of Dreyfus’ radical opinions had become mainstream.

Failed predictions. As Dreyfus had foreseen, the grandiose predictions of early AI researchers failed to come true. Fully intelligent machines (now known as “strong AI”) did not appear in the mid-1970s as predicted. HAL 9000 (whose capabilities for natural language, perception and problem solving were based on the advice and opinions of Marvin Minsky) did not appear in the year 2001. “AI researchers”, writes Nicolas Fearn, “clearly have some explaining to do.”[30] Today researchers are far more reluctant to make the kind of predictions that were made in the early days. (Although some futurists, such as Ray Kurzweil, are still given to the same kind of optimism.)

The biological assumption, although common in the forties and early fifties, was no longer assumed by most AI researchers by the time Dreyfus published What Computers Can’t Do. Although many still argue that it is essential to reverse-engineer the brain by simulating the action of neurons (such as Ray Kurzweil or Jeff Hawkins), they don’t assume that neurons are essentially digital, but rather that the action of analog neurons can be simulated by digital machines to a reasonable level of accuracy. (Alan Turing had made this same observation as early as 1950.)

The psychological assumption and unconscious skills. Many AI researchers have come to agree that human reasoning does not consist primarily of high-level symbol manipulation. In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high level symbol manipulation or “GOFAI”, towards new models that are intended to capture more of our unconscious reasoning. Daniel Crevier writes that by 1993, unlike 1965, AI researchers “no longer made the psychological assumption”, and had continued forward without it.

In the 1980s, these new “sub-symbolic” approaches included:

  • Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning. Dreyfus himself agrees that these sub-symbolic methods can capture the kind of “tendencies” and “attitudes” that he considers essential for intelligence and expertise.[34]
  • Research into commonsense knowledge has focussed on reproducing the “background” or context of knowledge.
  • Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec’s paradox.) Brooks would spearhead a movement in the late 80s that took direct aim at the use of high-level symbols, called Nouvelle AI. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention.

In the 1990s and the early decades of the 21st century, statistics-based approaches to machine learning used techniques related to economics and statistics to allow machines to “guess” – to make inexact, probabilistic decisions and predictions based on experience and learning. These programs simulate the way our unconscious instincts are able to perceive, notice anomalies and make quick judgements, similar to what Dreyfus called “sizing up the situation and reacting”, but here the “situation” consists of vast amounts of numerical data. These techniques are highly successful and are currently widely used in both industry and academia.
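As a toy illustration of this kind of probabilistic “guessing” (the data and the weather domain below are invented for the example, not drawn from any system the article cites):

```python
from collections import Counter

# Toy "experience": past observations of (feature, outcome) pairs.
history = [("cloudy", "rain"), ("cloudy", "rain"), ("cloudy", "dry"),
           ("clear", "dry"), ("clear", "dry"), ("clear", "rain")]

def guess(feature):
    # An inexact, probabilistic decision: return the outcome most often
    # seen with this feature, with no explicit rule about the domain.
    outcomes = Counter(o for f, o in history if f == feature)
    return outcomes.most_common(1)[0][0]

print(guess("cloudy"))  # "rain" -- a quick judgement from experience
```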

This research has gone forward without any direct connection to Dreyfus’ work.

Knowing-how and knowing-that. Research in psychology and economics has been able to show that Dreyfus’ (and Heidegger’s) speculation about the nature of human problem solving was essentially correct. Daniel Kahneman and Amos Tversky collected a vast amount of hard evidence that human beings use two very different methods to solve problems, which they named “system 1” and “system 2”. System 1, also known as the adaptive unconscious, is fast, intuitive and unconscious. System 2 is slow, logical and deliberate. Their research was collected in the book Thinking, Fast and Slow, and inspired Malcolm Gladwell‘s popular book Blink. As with AI, this research was entirely independent of both Dreyfus and Heidegger.

Ignored

Although clearly AI research has come to agree with Dreyfus, McCorduck claimed that “my impression is that this progress has taken place piecemeal and in response to tough given problems, and owes nothing to Dreyfus.”

The AI community, with a few exceptions, chose not to respond to Dreyfus directly. “He’s too silly to take seriously,” a researcher told Pamela McCorduck. Marvin Minsky said of Dreyfus (and the other critiques coming from philosophy) that “they misunderstand, and should be ignored.” When Dreyfus expanded Alchemy and AI to book length and published it as What Computers Can’t Do in 1972, no one from the AI community chose to respond (with the exception of a few critical reviews). McCorduck asks “If Dreyfus is so wrong-headed, why haven’t the artificial intelligence people made more effort to contradict him?”

Part of the problem was the kind of philosophy that Dreyfus used in his critique. Dreyfus was an expert in modern European philosophers (like Heidegger and Merleau-Ponty). AI researchers of the 1960s, by contrast, based their understanding of the human mind on engineering principles and efficient problem solving techniques related to management science. On a fundamental level, they spoke a different language. Edward Feigenbaum complained, “What does he offer us? Phenomenology! That ball of fluff. That cotton candy!” In 1965, there was simply too huge a gap between European philosophy and artificial intelligence, a gap that has since been filled by cognitive science, connectionism and robotics research. It would take many years before artificial intelligence researchers were able to address the issues that were important to continental philosophy, such as situatedness, embodiment, perception and gestalt.

Another problem was that he claimed (or seemed to claim) that AI would never be able to capture the human ability to understand context, situation or purpose in the form of rules. But (as Peter Norvig and Stuart Russell would later explain), an argument of this form cannot be won: just because one cannot imagine formal rules that govern human intelligence and expertise, this does not mean that no such rules exist. They quote Alan Turing’s answer to all arguments similar to Dreyfus’:

“we cannot so easily convince ourselves of the absence of complete laws of behaviour … The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.'”

Dreyfus did not anticipate that AI researchers would realize their mistake and begin to work towards new solutions, moving away from the symbolic methods that Dreyfus criticized. In 1965, he did not imagine that such programs would one day be created, so he claimed AI was impossible. In 1965, AI researchers did not imagine that such programs were necessary, so they claimed AI was almost complete. Both were wrong.

A more serious issue was the impression that Dreyfus’ critique was incorrigibly hostile. McCorduck wrote, “His derisiveness has been so provoking that he has estranged anyone he might have enlightened. And that’s a pity.” Daniel Crevier stated that “time has proven the accuracy and perceptiveness of some of Dreyfus’s comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier.”

notes/quotes:

[2nd number is re read’s location.. the ‘loc’ prior to number means it was from first read]
loc 26/20 (kindle)
the diff between the mathematical mind (esprit de géométrie) and the perceptive mind (esprit de finesse): the reason the mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement... these principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. we must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree.. mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous.. the mind.. does it tacitly, naturally, and without technical rules. – pascal pensees
pascal lit & num as colonialism
61

preface 

by Anthony Oettinger (https://en.wikipedia.org/wiki/Anthony_Oettinger).. dreyfus serves all of us in venturing into an arcane technical field as a critical layman.. a professional philosopher committed to questioning and analyzing the foundations of knowledge.. dreyfus raises important and fundamental questions. one might therefore expect the targets of his criticism to react w greater human intelligence than when they simply shouted loud in response to his earlier sallies.. *the issues deserve serious public debate. they are too scientific to be left to philosophers and too philosophical to be left to scientists.. dreyfus sees agonizingly **slow progress in all fundamental work on ai
*maybe public debate is part of what’s killing/distracting us.. hence the **slow progress.. even though perhaps matters little when focused on wrong ai.. ie: augmenting interconnectedness rather than ai.. mufleh humanity law et al
77 he sees ai as limited by its assumption that the world is explicable in terms of elementary atomistic concepts, in a tradition traceable back to the greeks.. this insight challenges not only contemporary science and tech but also some of the foundations of western philosophy.. loc 70/77 he puts in question the basic role that rules play in accepted ideas of what constitutes a satisfactory scientific explanation
loc 84/77 he is too modern to ask his questions from a viewpoint that assumes that man and mind are somehow set apart from the physical universe and therefore not within reach of science… quite to the contrary, he states explicitly his assumption that ‘there is no reason why, in principle, one could not construct an artificial embodied agent if one used components sufficiently like those which make up a human being’.. instead, he points out that his questions are ‘philosophically interesting only if we restrict ourselves to asking if one can make such a robot by using a digital computer’.. curious enough.. dreyfus’s own philosophical arguments lead him to see digital computers as limited not so much by being mindless as by having no body..
embodiment (process of)
loc 84/91 the enormous and admitted difficulty of ai in determining what is relevant when the environ presented to a digital computer has not, in some way, been artificially constrained.. the central statement of this theme is that ‘ a person experiences the objects of the world as already interrelated and full of meaning…there is no justification for the assumption that we first experience isolated facts/snapshots of facts or momentary views of snapshots of isolated facts and then give them significance… this is the point that contemporary philosophers such as heidegger and wittgenstein are trying to make.. this, dreyfus argues following merleau-ponty, is a consequence of our having bodies capable of an ongoing but unanalyzed mastery of their environment
martin heidegger.. maurice merleau-ponty..
91 to the computer scientist concerned w progress in his specialty and w deeper understanding of the world, dreyfus presents a profound challenge to the widespread idea that ‘knowledge consists of a large store of neural data’.. dreyfus clearly is not neutral loc 98/105

acknowledgements 

seymour papert included
seymour papert
loc 113/105

intro

Since the Greeks invented logic and geometry, the idea that all reasoning might be reduced to some kind of calculation so that all arguments could be settled *once and for all has fascinated most of the Western tradition’s rigorous thinkers.
but.. *shaw communication law
Socrates was the first to give voice to this vision. The story of artificial intelligence might well begin around 450 B.C. when (according to Plato) Socrates demands of Euthyphro, a fellow Athenian who, in the name of piety, is about to turn in his own father for murder: “I want to know what is characteristic of piety which makes all actions pious . . . that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men.” 121 socrates is asking Euthyphro for what modern computer theorists would call an “effective procedure,”

“a set of rules which tells us, from moment to moment, precisely how to behave.’

socrates supposed to law et al
Plato generalized this demand for moral certainty into an epistemological demand. According to Plato, all knowledge must be stateable in explicit definitions which anyone could apply. If one could not state his know-how in terms of such explicit instructions – if his knowing how could not be converted into knowing that – it was not knowledge but mere belief. According to Plato, cooks, for example, who proceed by taste and intuition, and poets who work from inspiration, have no knowledge: what they do does not involve understanding and cannot be understood. More generally, what cannot be stated explicitly in precise instructions – all areas of human thought which require skill, intuition, or a sense of tradition – are relegated to some kind of arbitrary fumbling. but plato was not yet fully a cyberneticist (although according to norbert wiener he was the first to use the term).. for plato was looking for semantic rather than syntactic criteria.. his rules presupposed that the person understood the meanings of the constitutive terms..
norbert wiener.. human use of human beings.. semantic: relating to meaning in language or logic.. For example, in everyday use, a child might make use of semantics to understand a mom’s directive to “do your chores” as, “do your chores whenever you feel like it.” However, the mother was probably saying, “do your chores right now.” syntactic: relating to the rules of language. An example of something syntactic is a sentence that uses the correct form of a verb.
136 Thus Plato admits his instructions cannot be completely formalized. Similarly, a modern computer expert, Marvin Minsky, notes, after tentatively presenting a Platonic notion of effective procedure: “This attempt at definition is subject to the criticism that the *interpretation of the rules is left to depend on some person or agent.”
marvin minsky.. *interpretation ness as red flag et al
aristotle: Yet it is not easy to find a formula by which we may determine how far and up to what point a man may go wrong before he incurs blame. But this *difficulty of definition is inherent in every object of perception; such questions of degree are bound up with the circumstances of the individual case, where our only criterion is the perception.
*like from manish jain‘s parrot’s training p8: many around the world are realizing.. why in naming the colour, we blind the eye defining/label(s) ness.. as the death of us.. killing the organism as fractal ness naming the colour ness
for the platonic project to reach fulfillment one breakthru is required: all appeal to intuition and judgement must be eliminated.. as galileo discovered that one could find *a pure formalism for describing physical motion by ignoring secondary qualities and teleological considerations, so, one might suppose, a galileo of human behavior might succeed in reducing all semantic considerations (appeal to meanings) to the techniques of syntactic (formal) manipulation
*by blinding/killing et al
The belief that such a total formalization of knowledge must be possible soon came to dominate Western thought.
151 Hobbes was the first to make explicit the syntactic conception of thought as calculation: “When a man reasons, he does nothing else but conceive a sum total from addition of parcels,” he wrote, “for REASON … is nothing but reckoning. . . .” it only remained to work out the univocal parcels or ‘bits’ w which this purely syntactic calculator could operate; *Leibniz, the inventor of the binary system, dedicated himself to working out the necessary unambiguous formal language..
*language as control/enclosure
leibniz thought he had found a universal and exact system of notation, an algebra, a symbolic language, a ‘universal characteristic’ by means of which ‘we can assign to every object its determined characteristic number’.. in this way  all concepts could be analyzed into a small number of original and undefined ideas; ..all knowledge could be expressed and brought together in one deductive system. On the basis of these numbers and the rules for their combination all problems could be solved and all controversies ended: ‘if someone would doubt my results’ leibniz said, ‘i would say to him: ‘let us calculate, sir’ and thus by taking pen/ink, we should settle the question.’
oi.. of math and men ness
like a computer .. program about to be written.. leibniz claims: since, however, the wonderful interrelatedness of all things makes it extremely difficult to formulate explicitly the characteristic numbers of individual things, i have invented an elegant artifice by virtue of which certain relations may be rep’d and fixed numerically and which may thus then be further determined in numerical calculation
literacy and numeracy both elements of colonialism/control/enclosure.. we need to calculate differently and stop measuring things
165 once the characteristic numbers are established for most concepts, mankind will then possess a new instrument which will enhance the capabilities of the mind to a far greater extent than optical instruments strengthen the eyes, and will supersede the microscope and telescope to the same extent that reason is superior to eyesight.. w this powerful new tool, the skills which plato could not formalize, and so treated as confused thrashing around, could be recuperated as theory..
oi.. blinding us ness..
In one of his “grant proposals” his explanations of how he could

reduce all thought to the manipulation of numbers

if he had money enough and time
exact same time i’m reading this.. from Rob:

“we’re trying to do to writing and other language arts what we’ve already done to mathematics” medium.com/@hhschiaravall…

“We’re trying to turn something rich and interconnected into something discrete, objective and measurable.”

measuring things ness.. we need to let go of any form of m\a\p fitting too with Benjamin‘s latest on ai and the new normal ..
Leibniz remarks: … Leibniz had only promises, but in the work of George Boole, a mathematician and logician working in the early nineteenth century, his program came one step nearer to reality. Like Hobbes, Boole

supposed that reasoning was calculating,

and he set out to “investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a Calculus . . .”
dang.. again.. we need to calculate differently and stop measuring things
181 Boolean algebra is a binary algebra for representing elementary logical functions. If “a” and “b” represent variables, “.” represents “and,” “+” represents “or,” and “1” and “0” represent “true” and “false” respectively, then the rules governing logical manipulation can be written in algebraic form as follows: (id theorems – additive, multiplicative et al)
binary ness.. killing us
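a tiny sketch (mine, not the book’s) of the kind of identity theorems the passage gestures at, checked over both truth values:

```python
# Boole's binary algebra: "." is AND, "+" is OR, 1 is true, 0 is false.
# A few additive/multiplicative identity theorems, verified exhaustively.
for a in (False, True):
    assert (a and a) == a           # a . a = a
    assert (a or a) == a            # a + a = a
    assert (a and True) == a        # a . 1 = a
    assert (a or False) == a        # a + 0 = a
    assert (a and False) == False   # a . 0 = 0
    assert (a or True) == True      # a + 1 = 1
```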
Western man was now ready to begin the calculation. Almost immediately, in the designs of Charles Babbage (1835), practice began to catch up to theory. Babbage designed what he called an “Analytic Engine” which, though never built, was to function exactly like a modern digital computer, using punched cards, combining logical and arithmetic operations, and *making logical decisions along the way based upon the results of its previous computations.
*decision making is unmooring us law
An important feature of Babbage’s machine was that it was digital. There are two fundamental types of computing machines: analogue and digital. Analogue computers do not compute in the strict sense of the word. They operate by measuring the magnitude of physical quantities. Using physical quantities, such as voltage, duration, angle of rotation of a disk, and so forth, proportional to the quantity to be manipulated, they combine these quantities in a physical way and measure the result. A slide rule is a typical analogue computer. A digital computer – as the word digit, Latin for “finger,” implies – represents all quantities by discrete states, for example, relays which are open or closed, a dial which can assume any one of ten positions, and so on, and then literally counts in order to get its result. 197 Thus, whereas analogue computers operate with continuous quantities, all digital computers are discrete state machines. As A. M. Turing, famous for defining the essence of a digital computer, puts it: [Discrete state machines] move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machines which can profitably be thought of as being discrete state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.
yeah.. and forget about aliveness.. aka: organism as fractal et al
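a tiny sketch (mine) of a discrete state machine in turing’s sense, using his own light-switch example.. the transition table is an assumption for the ie:

```python
# Turing's "convenient fiction": a switch treated as definitely on or
# definitely off, jumping between definite states with nothing in between.
transitions = {
    ("off", "toggle"): "on",
    ("on",  "toggle"): "off",
}

state = "off"
for event in ["toggle", "toggle", "toggle"]:
    state = transitions[(state, event)]  # a sudden jump, no intermediate positions
print(state)  # "on"
```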
Babbage’s ideas were too advanced for the technology of his time, for there was no quick efficient way to represent and manipulate the digits. He had to use awkward mechanical means, such as the position of cogwheels, to represent the discrete states. electric switches.. however.. provided the necessary tech breakthru..
off/on switch.. binary ness
..since a digital computer operates with abstract symbols which can stand for anything, and logical operations which can relate anything to anything, any digital computer (unlike an analogue computer) is a universal machine. First, as Turing puts it, it can simulate any other digital computer. 212 This special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent. Second, and philosophically more significant, any process which can be formalized so that it can be represented as a series of instructions for the manipulation of discrete elements, can, at least in principle, be reproduced by such a machine. Thus even an analogue computer, provided that the relation of its input to its output can be described by a precise mathematical function, can be simulated on a digital machine
so have to ignore the inbetween ness.. the aliveness.. the entropy et al

Moreover, the rules were built into the circuits of the machine. Once the machine was programmed there was no need for interpretation; no appeal to human intuition and judgment.

This was just what Hobbes and Leibniz had ordered, and Martin Heidegger appropriately saw in cybernetics the culmination of the philosophical tradition. thus while practical men like eckert and mauchly were designing the first electronic digital machine, theorists, such as turing, trying to understand the essence and capacity of such machines, became interested in an area which had thus far been the province of philosophers: the nature of reason itself..
alan turing
loc 242/252 Turing had grasped the possibility and provided the criterion for success, but his article ended with only the sketchiest suggestions about what to do next: We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision… 1\ the playing of chess, would be best…. 2\ understand and speak English. This process could follow the normal teaching of a child. .. Again I do not know what the right answer is, but I think both approaches should be tried. 257 A technique was still needed for finding the rules which thinkers from Plato to Turing assumed must exist – a technique for converting any practical activity such as playing chess or learning a language into the set of instructions Leibniz called a theory. 281 What was needed were rules for converting any sort of intelligent activity into a set of instructions. At this point Herbert Simon and Allen Newell, analyzing the way a student proceeded to solve logic problems, .. so-called algorithmic programs which follow an exhaustive method to arrive at a solution, but which rapidly become unwieldy when dealing with practical problems. This notion of a rule of practice provided a breakthrough 295 hence, we are not interested in methods that guarantee solutions, but which require vast amounts of computation. rather, we wish to understand how a mathematician, for ie, is able to prove a theorem even though he does not know when he starts how, or if, he is going to succeed.. But Newell and Simon soon realized that even this approach was not general enough. 310 All these kinds of information are heuristics – things that aid discovery. Heuristics seldom provide infallible guidance. . . . Often they “work,” but the results are variable and success is seldom guaranteed. on gps (newell and simon’s general problem solver): this means-end system of heuristic assumes the following: 1\ if object given is not desired one, differences will be detectable between available and desired object 2\ operators affect some features of their operands and leave others unchanged.. hence operators characterized by changes they produce.. 3\ some differences will prove more difficult to affect..
sounds like beginnings of rapid perpetuations of spinach or rock ness.. of decision making is unmooring us ness..
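a tiny sketch (mine, a toy version.. the goal/operators are invented) of the means-end heuristic just listed: detect a difference between available and desired object, then apply the operator characterized by the change it produces:

```python
# Toy means-ends analysis in the GPS spirit.
goal = {"washed": True, "dressed": True}
state = {"washed": False, "dressed": False}

# Operators characterized by the change they produce (assumption 2 above).
operators = {
    "washed": lambda s: {**s, "washed": True},
    "dressed": lambda s: {**s, "dressed": True},
}

while state != goal:
    # 1\ detect a difference between available and desired object
    diff = next(k for k in goal if state[k] != goal[k])
    # 2\ apply the operator that affects exactly that feature
    state = operators[diff](state)

print(state)  # all differences reduced
```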
325 it seemed that finally philosophical ambition had found the necessary technology: that the universal, high-speed computer had been given the rules for converting reasoning into reckoning. Simon and Newell sensed the importance of the moment and jubilantly announced that the era of intelligent machines was at hand….In short, we now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving; and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers. Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed computer can be programmed to exhibit them also.
oi
 This field of research, dedicated to using digital computers to simulate intelligent behavior, soon came to be known as “artificial intelligence.”.. But the term “artificial” does not mean that workers in artificial intelligence are trying to build an artificial man. ..Likewise, the term “intelligence” can be misleading. No one expects the resulting robot to reproduce everything that counts as intelligent behavior in human beings.
all red flags
340

These last metaphysicians are staking everything on man’s ability to formalize his behavior; to bypass brain and body, and arrive, all the more surely, at the essence of rationality.

oi.. behavior ness as red flag.. then too.. observing whales/whalespeak for that behavior ness.. no wonder it ‘worked’
Computers have already brought about a technological revolution comparable to the Industrial Revolution. .. everyone senses the importance of this revolution.. but we are so near the events that it is difficult to discern their significance.
can’t discern because 1\ legit alive ness not discernable  and  2\ observing whales
354 This much, however, is clear. Aristotle defined man as a rational animal, and since then reason has been held to be of the essence of man. ..if reason can be programmed into a computer, this will confirm an understanding of the nature of man, which Western thinkers have been groping toward for two thousand years but which they only now have the tools to express and implement. The incarnation of this intuition will drastically change our understanding of ourselves. If, on the other hand, artificial intelligence should turn out to be impossible, then we will have to distinguish human from artificial reason, and this too will radically change our view of ourselves. ..
understanding ness.. name the colour ness (136 parrots ref).. already changes/kills our essence
we must try to understand to what extent ai is possible
intellect ness is already artificial being/fittingness
what we learn about the limits of intelligence in computers will tell us something about the character and extent of human intelligence
rather of whale programmed ‘intelligence
The need for a critique of artificial reason is a special case of a general need for critical caution in the behavioral sciences. Chomsky remarks that in these sciences “there has been a natural but unfortunate tendency to ‘extrapolate,’ from the thimbleful of knowledge that has been attained in careful experimental work and rigorous data-processing, to issues of much wider significance and of great social concern.” He concludes that the experts have the responsibility of making clear the actual limits of their understanding and of the results they have so far achieved. A careful analysis of these limits will demonstrate that in virtually every domain of the social and behavioral sciences the results achieved to date will not support such “extrapolation.” Artificial intelligence, at first glance, seems to be a happy exception to this pessimistic principle.
chomsky serious things law – beyond his thinking – ie: augmenting interconnectedness rather than artificial intellect ness et al
397 before long, we may learn how to set them to work upon the very special problem of improving their own capacity to solve problems
too bad that’s not the issue/essence of human being.. ie: solving problems.. esp whale problems
Thinkers in both camps have failed to ask the preliminary question whether machines can in fact exhibit even elementary skills like playing games, solving simple problems, reading simple sentences and recognizing patterns, presumably because they are under the impression, fostered by the press and artificial-intelligence researchers such as Minsky, that the simple tasks and even some of the most difficult ones have already been or are about to be accomplished. To begin with, then, these claims must be examined. 411 Unfortunately, the tenth anniversary of this historic talk went unnoticed, and workers in artificial intelligence did not, at any of their many national and international meetings, take time out from their progress reports to confront these predictions with the actual achievements. Now fourteen years have passed, and we are being warned that it may soon be difficult to control our robots. It is certainly high time to measure this original prophecy against reality. 468 It is essential to be aware at the outset that despite predictions, press releases, films, and warnings, artificial intelligence is a *promise and not an accomplished fact. Only then can we begin our examination of the actual state and future hopes of artificial intelligence at a sufficiently rudimentary level
rather a *cancer
The field of artificial intelligence has many divisions and subdivisions, but the most important work can be classified into four areas: game playing, language translating, problem solving, and pattern recognition
all red flags
482 but these prejudices are so deeply rooted in our thinking that the only alt to them seems to be an obscurantist rejection of the possibility of a science of human behavior
yeah.. let’s do that
496 If the order of argument presented above and the tone of my opening remarks seem strangely polemical for an effort in philosophical analysis, I can only point out that, as we have already seen, artificial intelligence is a field in which the rhetorical presentation of results often substitutes for research, so that research papers resemble more a debater’s brief than a scientific report. Such persuasive marshaling of facts can only be answered in kind. Thus the accusatory tone of Part I. In Part II, however, I have tried to remain as objective as possible in testing fundamental assumptions, although I know from experience that challenging these assumptions will produce reactions similar to those of an insecure believer when his faith is challenged. 510 seymour papert of mit responded: ‘i protest vehemently against crediting dreyfus w any good.. to state that you can associate yourself w one of his conclusions is unprincipled.. dreyfus’ concept of coupling men w machines is based on a thorough misunderstanding of the problems and has nothing in common w any good statement that might go by the same words
oi seymour papert am thinking augmenting ness vs artificial ness
However, in anticipation of the impending outrage I want to make absolutely clear from the outset that what I am criticizing is the implicit and explicit philosophical assumptions of Simon and Minsky and their co-workers, not their technical work. 525 An artifact could replace men in some tasks – for example, those involved in exploring planets – without performing the way human beings would and without exhibiting human flexibility. Research in this area is not wasted or foolish, although a balanced view of what can and cannot be expected of such an artifact would certainly be aided by a little philosophical perspective.

part 1

TEN YEARS OF RESEARCH IN ARTIFICIAL INTELLIGENCE (1957-1967) p 45 (pdf)/528
not sure what/where pdf is.. so will just keep on adding kindle loc numbers as i re read
Phase I (1957-1962) Cognitive Simulation I. Analysis of Work in Language Translation, Problem Solving, and Pattern Recognition
1\ language as control/enclosure.. whalespeak.. begs ie: idiosyncratic jargon ness..  2\ work as solving other people’s problems.. taleb center of problem law.. 3\ whale patterns.. rather than ie: organism as fractal ness
language translation p 3 Anthony Oettinger, the first to produce a mechanical dictionary (1954), recalls the climate of these early days: “The notion of . . . fully automatic high quality mechanical translation, planted by overzealous propagandists for automatic translation on both sides of the Iron Curtain and nurtured by the wishful thinking of potential users, blossomed like a vigorous weed.” This initial enthusiasm and the subsequent disillusionment provide a sort of paradigm for the field. p 4/545 It was not sufficiently realized that the gap between such output . . . and high quality translation proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones whereas the “few” remaining problems were the harder ones very hard indeed. p 5/561 problem solving Much of the early work in the general area of artificial intelligence, especially work on game playing and problem solving, was inspired and dominated by the work of Newell, Shaw, and Simon at the RAND Corporation and at Carnegie Institute of Technology. 7 Their approach is called Cognitive Simulation (CS) because the technique generally employed is to collect protocols from human subjects, which are then analyzed to discover the heuristics these subjects employ. 8 A program is then written which incorporates similar rules of thumb. p 6 Soon, however, Simon gave way to more enthusiastic claims: Subsequent work has tended to confirm [our] initial hunch, and to demonstrate that heuristics, or rules of thumb, form the integral core of human problem-solving processes. As we begin to understand the nature of the heuristics that people use in thinking the mystery begins to dissolve from such (heretofore) vaguely understood processes as “intuition” and “judgment.” 591 But, as we have seen in the case of language translating, difficulties have an annoying way of reasserting themselves. This time, the “mystery” of judgment reappears in terms of the organizational aspect of the problem solving programs. Already in 1961 at the height of Simon’s enthusiasm, Minsky saw the difficulties which would attend the application of trial-and-error techniques to really complex problems: The simplest problems, e.g., playing tic-tac-toe or proving the very simplest theorems of logic, can be solved by simple recursive application of all the available transformations to all the situations that occur, dealing with sub-problems in the order of their generation. This becomes impractical in more complex problems as the search space grows larger and each trial becomes more expensive in time and effort. One can no longer afford a policy of simply leaving one unsuccessful attempt to go on to another, for each attempt on a difficult problem will involve so much effort that one must be quite sure that whatever the outcome, the effort will not be wasted entirely.. one must become selective to the point that no trial is made w/o a compelling reason
graeber grant law.. graeber min\max law
This, Minsky claims, shows the need for a planning program, but as he goes on to point out: Planning methods . . . threaten to collapse when the fixed sets of categories adequate for simple problems have to be replaced by the expressions of descriptive language.
no prep.. no train.. no agenda..
p 8/629 Public admission that GPS was a dead end, however, did not come until much later. in 1967.. simon/newell announce gps being abandoned.. that gps has collapsed under the weight of its own organization becomes clearer later in the monograph..
and we keep doing that.. we need an org/infra that doesn’t collapse itself  ie: as infra
one serious limitation of the expected performance of gps is the size of the program and the size of its rather elab data structure.. the program itself occupies a significant portion of the computer memory and the generation of new data structures during problem solving quickly exhausts the remaining memory.. thus gps is only designed to solve modest problems whose rep is not too elab..
#1 difficulty: wrong/non-legit/whale data..
646 pattern recognition a man is continually exposed to a welter of data from his senses, and abstracts from it the patterns relevant to his activity at the moment. his ability to solve problems, prove theorems and generally run his life depends on this type of perception
yeah.. to ‘run’ his whale life.. for human being ness.. irrelevant/cancerous
661 these all operate by searching for predetermined topological features of the characters to be recognized, and checking these features against preset or learned ‘definitions’ of each letter in terms of these traits.. the trick is to find relevant features, that is, those that remain generally invariant throughout variations of size/orientation, and other distortions..
which would be none.. ie: no relevant features of aliveness/organism as fractalness.. if predetermining/checking.. if any form of m\a\p
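a tiny sketch (mine.. the trait sets are invented) of the feature-checking scheme described above: predetermined ‘definitions’ of each letter in terms of traits, recognition as checking features against them:

```python
# Each letter has a predetermined "definition" as a set of topological
# traits; recognition just checks an input's features against those sets.
definitions = {
    "A": {"closed_loop", "apex", "crossbar"},
    "L": {"vertical_stroke", "base_stroke"},
    "O": {"closed_loop"},
}

def recognize(features):
    # Pick the letter whose trait definition overlaps the input most;
    # anything outside the predetermined feature set is invisible to it.
    return max(definitions, key=lambda letter: len(definitions[letter] & features))

print(recognize({"closed_loop", "apex", "crossbar", "smudge"}))  # "A"
```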
but none of these programs constitutes a breakthru in pattern recognition.. each is a small engineering triumph, an ad hoc solution of a specific problem.. w/o general applicability
specific whale problems.. that we keep suckingout our energy on.. w no legit applicability to alive ness.. to legit needs..  et al
675 but as long as recognition depends on a limited set of features, whether ad hoc or general, preprogrammed or generated, mechanical recognition has gone about as far as it can go
still today.. that’s all we’re doing.. we need to let go of any form of m\a\p
704 alas.. i feel that many of the hoped-for objectives may well be porcelain eggs; they will never hatch, no matter how long heat is applied to them, because they require pattern discovery purely on the part of machines working alone.. the tasks of discovery demand human qualities
and.. taleb center of problem law
conclusion rather than climbing blindly, it is better to look where one is going. it is time to study in detail the specific problems confronting work in ai and the underlying difficulties that they reveal..
rather.. let’s spend our energies augmenting interconnectedness
p 12/719 II. The Underlying Significance of Failure to Achieve Predicted Results Negative results, provided one recognizes them as such, can be interesting. Diminishing achievement, instead of the predicted accelerating success, perhaps indicates some unexpected phenomenon. Perhaps we are pushing out on a continuum like that of velocity, where further acceleration costs more and more energy as we approach the speed of light, or perhaps we are instead facing a discontinuity, which requires not greater effort but entirely different techniques, as in the case of the tree-climbing man who tries to reach the moon
yeah that.. drucker right thing law.. et al
p 19/853 We have seen that Bar-Hillel and Oettinger, two of the most respected and best-informed workers in the field of automatic language translation, agree in their pessimistic conclusions concerning the possibility of further progress in the field. Each has realized that in order to translate a natural language, more is needed than a mechanical dictionary – no matter how complete – and the laws of grammar – no matter how sophisticated. p 20/867 .. our use of language, while precise, is not strictly rulelike.. Pascal already noted that the perceptive mind functions “tacitly, naturally, and without technical rules.” Wittgenstein has spelled out this insight in the case of language. 882

We are unable clearly to circumscribe the concepts we use; not because we don’t know their real definition, but because there is no real “definition” to them.

we need to let go of naming the colour ness et al
To suppose that there must be would be like supposing that whenever children play with a ball they play a game according to strict rules.

A natural language is used by people involved in situations in which they are pursuing certain goals. These extralinguistic goals, which need not themselves be precisely stated or statable, provide some of the cues which reduce the ambiguity of expressions as much as is necessary for the task at hand. 

idio jargon ness via curiosity matching.. ie: imagine if we just focused on listening to the itch-in-8b-souls.. first thing.. everyday.. and used that data to augment our interconnectedness.. we might just get to a more antifragile, healthy, thriving world.. the ecosystem we keep longing for.. what the world needs most is the energy of 8b alive people
fringe consciousness takes account of cues in the context, and probably some possible parsings and meanings.. all of which would have to be made explicit in the output of a machine
machine can’t idiosyncratic jargon.. but could listen to us do it.. and use that data to connect us ie: tech w/o judgment.. as it could be
896 when occasionally ai enthusiasts admit the difficulties confronting present techniques, the appeal to learning is a fav panacea. papert, for ie, has recently claimed one cannot expect machines to perform like adults unless they are first taught, and that what is needed is a machine w the child’s ability to learn. this move, however, as we shall see, only evades the problem in the area of language learning.. the only successful program, epam (elementary perceiver and memorizer), simulates learning of the association of nonsense syllables.. a simplified case of verbal learning… the interesting thing about nonsense syllable learning however is that it is not a case of language learning at all.. learning to associate nonsense syllables is, in fact, acquiring something like a pavlovian conditioned reflex..
yeah.. more of how a machine could help w idiosyncratic jargon
p 22/910 ebbinghaus, at the end of the 19th cent, proposed this form of conditioning precisely to eliminate any use of meaningful grouping or appeal to a context of previously learned associations..
let’s use its ‘nonsense’ ness for natural/emergent/non-previous/everyday-anew.. connections.. ie: imagine if we
only successful case of cognitive simulation simulates a process which does not involve comprehension, and so is not genuinely cognitive
that’s good.. that’s what we need from tech.. ie: tech w/o judgment
What is involved in learning a language is much more complicated and more mysterious than the sort of conditioned reflex involved in learning to associate nonsense syllables. if a child doesn’t already use language, how do we ever get off the ground.. wittgenstein suggests that the child must be engaged in a ‘form of life’ in which he shares at least some of the goals/interests of the teacher, so that the activity at hand helps to delimit the possible reference of the words used
key – must be engaged in life.. not sea world
What, then, can be taught to a machine? This is precisely what is in question in one of the few serious objections to work in artificial intelligence made by one of the workers himself
to match us via idiosyncratic jargon.. self-talk as data.. et al..  not ‘learned’ .. programmed.. told what to do.. not any form of m\a\p
p 23/924 al samuel.. who wrote the celebrated checkers program, has argued that machines cannot be intelligent because they can only do what they are instructed to do
yeah that.. exactly.. and alive beings die when you do that.. ie: telling people what to do
michael scriven argues that new strategies are ‘put into the computer by the designer in exactly the same metaphorical sense that we put into our children everything they come up w in their later life’
put into whales.. not alive/free children.. can’t put stuff in alive/free people.. just in dead machines/whales
Data are indeed put into a machine but in an entirely different way than children are taught
well.. naturally taught.. we do school much like you’d put into a machine.. so actually same way.. but whales.. alive children/people aren’t ‘taught’
can someone be a man’s teacher in this? certainly
well.. a whale’s teachers.. but not an alive being..
there are also rules, but they do not form a system, and only experienced people can apply them right.. unlike calculation rules
lit & num as colonialism et al
937 it is this ability to grasp the point in a particular context which is true learning; since children can and must make this leap, they can and do surprise us and come up w something genuinely new
not yet scrambled.. newness.. only surprising to whales
this trial and error search is another ie of a brute force technique like counting out in chess.. but just as in game playing the possibilities soon get out of hand. in problem solving one needs some systematic way to cut down the search maze so that one can spend one’s time exploring promising alts..
exploring alts isn’t deep enough.. what we need is curiosity over decision making et al
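a toy sketch (hypothetical numbers, made-up scoring.. not from the book) of the contrast in the quote: brute force ‘counting out’ vs a heuristic that prunes the search maze:

```python
from itertools import product

BRANCHING, DEPTH = 10, 6  # a 10-way, 6-deep space: a million paths

def count_out():
    """brute force: enumerate every path, chess-style 'counting out'"""
    return sum(1 for _ in product(range(BRANCHING), repeat=DEPTH))

def beam_search(keep=2):
    """prune w a stand-in heuristic: keep only the 'promising' paths"""
    paths = [()]
    for _ in range(DEPTH):
        expanded = [p + (m,) for p in paths for m in range(BRANCHING)]
        expanded.sort(key=sum)   # made-up stand-in for 'promising'
        paths = expanded[:keep]  # cut down the search maze
    return paths

print(count_out())         # 1000000 alternatives examined
print(len(beam_search()))  # 2 kept.. but who decides what counts as promising?
```

the pruning only works after something has already judged which alts are promising.. which is the essential/inessential distinction the next quotes circle around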
951 in fact certain details of newell and simon’s article ‘gps: a program that simulates human thought’ suggest that these further operations are not like the programmed operations at all
yeah.. organism as fractal is a horse of a different color
964 unable thus to eliminate the divergence and unwilling to try to understand its significance, newell and simon dismiss the discrepancy as ‘an ie of parallel processing’
so like voluntary compliance.. ie: assuming we are machine like..
1045 thus we are left w no computer theory of the fundamental first step in all problem solving: the making of the essential/inessential distinction
yeah.. to irrelevant s ness.. but then too.. not to interconnectedness.. big diff is what’s artificial/sea world ness
the human ability to distinguish the essential from the inessential in a specific task
rather.. of the (non) essentiality of the task .. of task ness.. itself.. taleb center of problem law et al
since game playing is a form of problem solving we should expect to find the same process in chess playing and indeed we do.. to quote hearst: ‘de groot concluded from his study that differences in playing strength depend much less on calculating power than on ‘skill in problem conception’.. grandmasters seem to be superior to masters in isolating the most significant features of a position, rather than in the total number of moves they consider.. somewhat surprisingly de groot found that grandmasters do not examine more possibilities.. nor do they look further ahead.. the grandmaster is somehow able to ‘see the core of the problem immediately, whereas the expert or lesser player finds it w difficulty or misses it completely, even though he analyzes as many alts and looks as many moves ahead as the grandmaster’ 1075 but thus far no one has even tried to suggest how a machine could perform this selection operation or how it could be programmed to learn to perform it, since it is one of the conditions for learning from past experience..
rather.. that learning from history ness is already cancer ized..
this lack of progress is surprising only to those, like feigenbaum, who do not recognize the ability to distinguish the essential from the inessential as a human form of ‘info processing’.. necessary for learning and problem solving.. yet not amenable to the mechanical search techniques which may operate once this distinction has been made.. it is precisely this function of intelligence which resists further progress in the problem solving field
rather.. that the problem solving field is not the essential.. ie: not the essence of human being
it is an illusion, moreover, to think that the planning problem can be solved in isolation; that essential/inessential operations are given like blocks and one need only sort them out.. it is easy to be hypnotized by oversimplified and ad hoc cases.. like the logic problem.. into thinking that some operations are essential or inessential in themselves. it then looks as if we can find them because they are already there, so that we simply have to discover a heuristic rule to sort them out.. but normally (and often even in logic) essential operations are not around to be found because they do not exist independently of the pragmatic context..
let go of planning and problem solving.. ie: undisturbed ecosystem et al
1090 their work w gps, on the contrary, demos that all searching, unless directed by a prelim structuring of the problem, is merely muddling thru
let’s try as infra as structuring of the problem deep enough
1119 it seems clear that we do not need to fully normalize and smooth out the pattern, since we can perceive the pattern as skewed, incomplete, large or small, and so on, at the same time we recognize it
better still.. let go of trying to perceive a pattern.. of trying to recognize..
distorted patterns are recognized not as falling under some looser and more ingenious sets of traits, but as exhibiting the same simple traits as the undistorted figures, along w certain accidental additions or omissions.. similarly, noise is not tested and excluded; it is ignored as inessential.. here again, we note the human ability to distinguish the essential from the inessential..
loaded
133 earl hunt makes the same assumption in his review of pattern recognition work: ‘pattern recognition, like concept learning, involves the learning of a classification rule’..
whalespeak
one is led to look for a sort of perceptual heuristic, the ‘powerful operators’ which no one as yet has been able to find
rather.. that most can no longer hear.. (already in each one of us)
many traits crucial to discrimination are never taken up explicitly at all but do their work while remaining on the fringe of consciousness
sounds like undisturbed ecosystem ness .. if that..
1148 whereas in chess we begin w global sense of situation and have recourse to counting out only in the last anal, in perception we need never appeal to any explicit traits.. we normally recognize an object as similar to other objects w/o being aware of it as an ie of a type or as a member of a class defined in terms of specific traits..
label(s) ness as the death of us ness
as aron gurwitsch puts it in his anal of the diff between perceptual and conceptual consciousness: ‘perceived objects appear to us w generic determinations.. but.. and this is the decisive point.. to perceive an object of a certain kind is not at all the same thing as grasping that object as representative or as a particular case of a type.. 1164 this shift from perceptual to conceptual consciousness (from the perceptive to the mathematical frame of mind, to use pascal’s expression) is not necessarily an improvement
ha of math and men et al
evidently, in pattern recognition, passing from implicit perceptual grouping to explicit conceptual classification.. is usually disadvantageous.. the fact that we need not conceptualize or thematize the traits common to several instances of the same pattern in order to recognize that pattern distinguishes human recognition from machine recognition which only occurs on the explicit conceptual level of class membership
loaded
in the cases thus far considered, the traits defining a member of a class, while generally too numerous to be useful in practical recognition, could at least in principle always be made explicit. in some cases, however, such explicitation is not even possible.. to appreciate this point we must first get over the idea, shared by traditional philosophers and workers in ai alike, that pattern recognition can always be understood as a sort of classification 1191 doctors will use names of diseases w/o ever deciding which phenomena are to be taken as criteria and which as symptoms; and this need not be a deplorable lack of clarity
missing it ness.. meaning.. we’re totally missing the problem deep enough
1233 similarity is the ultimate notion in wittgenstein’s anal and it cannot be reduced.. as machine thinking would require.. to a list or disjunction of identical, determinate features..
p 40/1261 Summary. Human beings are able to recognize patterns under the following increasingly difficult conditions:

1. The pattern may be skewed, incomplete, deformed, and embedded in noise;

2. The traits required for recognition may be “so fine and so numerous” that, even if they could be formalized, a search through a branching list of such traits would soon become unmanageable as new patterns for discrimination were added;

3. The traits may depend upon external and internal context and are thus not amenable to context-free specification;

4. There may be no common traits but a “complicated network of overlapping similarities,” capable of assimilating ever new variations.

Any system which can equal human performance must, therefore, be able to

1. Distinguish the essential from the inessential features of a particular instance of a pattern;

2. Use cues which remain on the fringes of consciousness;

3. Take account of the context;

4. Perceive the individual as typical, i.e., situate the individual with respect to a paradigm case.

p 41/1279 moreover, it is generally acknowledged that further progress in game playing, language translation, and problem solving awaits a breakthru in pattern recognition research
rather.. what we need is a letting go of all of it.. of any form of m\a\p
conclusion The basic problem facing workers attempting to use computers in the simulation of human intelligent behavior should now be clear: all alternatives must be made explicit. 1287 Instead of these triumphs, an overall pattern has emerged: success with simple mechanical forms of information processing, great expectations, and then failure when confronted with more complicated forms of behavior.

ch2

Phase II (1962-1967) Semantic Information Processing
1302 this became transformed into the goal of discovering what we might call minimal, self organizing systems.. a paradigm of this approach is to find large collections of generally similar components that, when arranged in a very *weakly specified structure and placed in an **appropriate environ would eventually come to behave in an ‘adaptive’ fashion
sounds like using *as infra (emphasis on the specifications as weakly.. as in .. free of them) and **hari rat park law to get back/to an undisturbed ecosystem.. sans behave/adaptive ness.. sans any form of m\a\p
p 44/1334 we shall also find again that only an unquestioned underlying faith enables workers such as minsky to find this situation encouraging
marvin minsky voluntary compliance ness
ANALYSIS OF A PROGRAM WHICH “UNDERSTANDS ENGLISH” BOBROW’S STUDENT Of the five semantic information processing programs collected in Minsky’s book, Daniel Bobrow’s STUDENT, a program for solving algebra word problems, is put forward as the most successful. It is, Minsky tells us, “a demonstration par excellence of the power of using meaning to solve linguistic problems.” Indeed, Minsky devotes a great deal of his Scientific American article to Bobrow’s program and goes so far as to say that “it understands English.”
oi lit & num as colonialism et al
1344 we know a good type of data structure in which to store info needed to answer questions in this context, namely, algebraic equations
oi.. of math and men
it is important to note that the problem was chosen because the restricted context made it easier
dead = easier
the program simply breaks up the sentences of the story problem into units on the basis of cues such as the words ‘times.. of.. equals..’ etc; equates these sentence chunks w x’s and y’s and tries to set up simultaneous equations.. if these equations can’t be solved, it appeals to further rules for breaking up the sentences into other units and tries again.. the whole scheme works only because there is the constraint, not present in understanding ordinary discourse, that the pieces of the sentence, when rep’d by variables, will set up soluble equations. as minsky puts it: ‘..some possible syntactic ambiguities in the input are decided on the overall basis of algebraic consistency’ p 46/1361 Choosing algebra problems also has another advantage: In natural language, the ambiguities arise not only from the variety of structural groupings the words could be given, but also from the variety of meanings that can be assigned to each individual word. In STUDENT the strong semantic constraint (that the sentences express algebraic relations between the designated entities) keeps the situation more or less under control.
lit & num as colonialism/ language as control/enclosure.. et al
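a hypothetical sketch (not bobrow’s actual code) of the cue-word chunking just described: split the restricted english on cue words, treat each leftover chunk as an opaque variable:

```python
import re

CUES = {"times": "*", "plus": "+", "equals": "="}  # made-up cue list

def to_equation(sentence):
    """split at cue words; every other chunk becomes an opaque 'variable'"""
    pattern = r"\b(" + "|".join(CUES) + r")\b"
    parts = re.split(pattern, sentence.lower())
    return [CUES.get(p.strip(), p.strip()) for p in parts if p.strip()]

print(to_equation("the distance equals the speed times the time"))
# -> ['the distance', '=', 'the speed', '*', 'the time']
# 'the same phrase must always be used to rep the same variable'..
# so the scheme collapses the moment the wording varies
```

nothing here touches meaning.. the chunks only behave like variables because the input is guaranteed in advance to express an algebraic relation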
for such a ‘strong constraint’ eliminates just that aspect of natural language, namely its ambiguity, which makes machine processing of natural language difficult, if not impossible..
that’s why it’s important to note what we are using tech for.. ie: not understanding/thinking/intelligence et al.. but rather taking crazy/natural/ambiguous/unlimited data in .. and using it non judgmentally.. to augment our interconnectedness.. imagine if we
What, then, has been achieved? bobrow is rather cautious. although his thesis is somewhat misleadingly entitled ‘natural language input for a computer problem solving program’ bobrow makes clear from the outset that the program ‘accepts as input a comfortable but restricted subset of english
what we need to input is ie: idiosyncratic jargon.. where the program is seen as the subset (augment) of interconnected communication
1377 this is straightforward enough, and seems an admirable attempt to claim no more than is justified by the restricted choice of material. in the course of the work, bobrow even makes clear that ‘the student program considers words as symbols, and makes do w as little knowledge about the meaning of words as is compatible w the *goal of finding a solution of the particular problem
sounds like tech w/o judgment.. with the *goal of finding a match/connection via a particular daily curiosity ie: as infra ness
‘the semantic model in the student system is based on one relationship (*equality) and **five basic arithmetic functions‘ – bobrow
imagine if we try *equity (everyone getting a go everyday).. and just **two humane/alive functions..
bobrow: for purposes of this report i have adopted the following operational defn of ‘understanding’: a computer understands a subset of english if it accepts input sentences which are members of this subset and answers questions based on info contained in the input.. the student system understands english in this sense..
let’s just go w idiosyncratic jargon.. words/no words.. et al.. over sentences.. toward discrimination as equity et al.. not even mentioning here all the control issues involved with basing this all on ‘english’ ness
1393 however, one can’t help being misled into feeling that if bobrow uses ‘understand’ rather than ‘processes’ it must be because his program has something to do w human understanding.. minsky exploits this ambiguity in his rhetorical article simply by dropping the quotation marks.. minsky makes even more surprising and misleading claims concerning the ‘enormous ‘learning potential’’ of bobrow’s program: ‘consider the qualitative effect, upon the subsequent performance of bobrow’s student, of telling it that ‘distance equals speed times time’.. that one experience alone enables it to handle a large new portion of ‘high school algebra’; the physical position-velocity-time problems.. it is important not to fall into the habit.. of concentrating only on the kind of ‘learning’ that appears as *slow-improvement-attendant-upon-sickeningly-often-repeated-experience! bobrow’s program does not have any cautious statistical devices that have to be told something over and over again, so its learning is too brilliant to be called so’
*huge ie of .. whalespeak perpetuating sea world
1399 that is, the program can now plug one distance, one rate, and one time into the equation d=rt; but that it does not understand anything is clear from the fact that it cannot use this equation twice in one problem, for it has no way of determining which quantities should be used in which equation.. as bobrow admits: ‘the same phrase must always be used to rep the same variable in a problem’.. no learning has occurred.. once he has removed the quotation marks from ‘understand’ and interpreted the quotation marks around ‘learning’ to mean superhuman learning.. minsky is free to engage in the usual riot of speculation
like describing all of us whales.. so a lot of this book is: what human beings can’t do as whales.. or what whales can’t do.. we haven’t made computers like humans.. we have made humans like computers.. like machines.. we still don’t have a human use of human beings (either did/does that book)
p 48/1415 given a model of its own workings.. it (machine) could use its problem solving power to work on the problem of self improvement..
this isn’t the point of human being.. nor is it best/humane use of computers/tech
Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin. As the machine improves both itself and its model of itself, we shall begin to see all the phenomena associated with the terms “consciousness,” “intuition” and “intelligence” itself. It is hard to say how close we are to this threshold, but once it is crossed the world will not be the same.
nah.. that’s same song ness.. iterating whales.. not beings
It is not as hard to say how close we are to this threshold as Minsky would like us to believe.. we need only ask: to what extent can bobrow’s techniques be generalized and extended
this is not organism as fractal ness.. so not even worth extending/generalizing for a machine function.. and beyond worth ness.. rather.. it becomes the death of us
1430 5 yrs have passed since bobrow made this claim and no more sophisticated semantic theory has been forthcoming.. Why Bobrow and Minsky think, in the face of the peculiar restrictions necessary to the function of the program, that such a generalization must be possible is hard to understand. Nothing, I think, can justify or even explain their optimism concerning this admittedly limited and ad hoc approach.. their general optimism that some such computable approach must work, however, can be seen to follow from a fundamental metaphysical assumption concerning the nature of language and of human intelligent behavior .. namely that whatever orderly behavior people engage in can in principle be formalized and processed by digital computers..
oi carhart-harris entropy law et al
this leads minsky and bobrow to shrug off all current difficulties as tech limitations, imposed, for ie, by the restricted size of the storage capacity of present machine memories..
missing it ness
1458 gps avoids looking at complex structures on a given level by decomposing them into smaller structures tied to subgoals.. when a substructure is handled at some deeper subgoal level it is ‘out of context’ in that the necessary info as to how the achievement of this subgoal contributes to the achievement of larger goals is lacking
blindness from naming the colour
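a minimal hypothetical sketch of the gps-style decomposition the quote describes.. a goal is reduced to subgoals via operators, and each subgoal is pursued w/o any record of how it serves the top level (operators/goals here are invented):

```python
OPERATORS = {
    # operator: (precondition it needs, difference it removes)
    "drive": ("have_car", "at_destination"),
    "repair": ("have_tools", "have_car"),
    "borrow_tools": (None, "have_tools"),
}

def solve(goal, state, depth=0):
    """recursively reduce goal to subgoals, means-ends style"""
    if goal in state:
        return
    for op, (precond, removes) in OPERATORS.items():
        if removes == goal:
            print("  " * depth + f"subgoal {goal!r} -> apply {op!r}")
            if precond:
                # pursued 'out of context': nothing at this depth knows
                # how the subgoal contributes to the top-level goal
                solve(precond, state, depth + 1)
            state.add(goal)
            return

solve("at_destination", set())
```

each level sees only its own difference to remove.. the ‘global’ view the next quote asks for is exactly what this structure throws away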
the mech we have sketched provides a pattern recognition device capable of taking a look at the problem which is ‘global’ yet has access to the full structure.. such ‘global’ guidance could be expected to save gps a large amount of the time now spent in setting up and pursuing subgoals that do not contribute to achieving goals at or near the top level.. this alone would be a worthwhile contribution
not if those (or any) goals aren’t the essence/point of human being ness.. ie: undisturbed ecosystem and taleb center of problem law
*certainly the study of these problems in the relatively well understood domain of phrase structure languages is *a natural next step toward the development of genuine ‘generalization learning’ by machines and a prereq to consideration of learning in still more complex descriptive language environs.. one interesting possibility.. apply entire phrase structure plus gps apparatus to improving its own set of transformation rules
*blah blah whalespeak.. suck ing our energy
1472 which, of course, raises the usual question: why do minsky and evans so confidently expect that the ad hoc techniques used to solve this specific and rather complex analogy problem can be generalized.. a hint as to the assumptions underlying this confidence can be found in minsky’s surprising comparison of evans’ program to human analogy solving.. in spite of his disclaimers that ai is not interested in cog simulation, minsky gives the following ‘mentalistic’ description of evans’ program
same song today.. spinning our wheels.. sucking energy.. on not deep enough ness
1488 for each answer figure, ‘weaken’ ie, generalize each relation just enough so that it will apply to the figure
ie of voluntary compliance and gray research law et al
by choosing that hypothesis which involved the least ‘weakening’ of the original
oi
the details of the selection rules.. amount in effect to evans’ theory of human behavior in such situations.. i feel sure that something of this general character is involved in any kind of analogical reasoning.. this ‘something’ is put more clearly in minsky’s sci american article.. there he says: ‘i feel sure that rules or procedures of the same general character are involved in any kind of analogical reasoning’
oi
1503 it is true that if human beings did solve analogy problems in this way, there would be every reason to expect to be able to improve and generalize evans’ program, since human beings certainly surpass the machines’ present level of performance.. but, as in the case of gps, there is no evidence that human beings proceed in this way and descriptive, psychological evidence suggests that they do not.. what happens when a person is confronted w a figure such as figure 2?
why are we so obsessed w behavior ness.. not to mention behavior ness in sea world we gotta let go
1518 confronted now w a pairing of a and b, the human observer may have a rather rich and dazzling experience
ha
1532 this episode of perceptual problem solving has all the aspects of genuine thinking: the challenge, the productive confusion, the promising leads, the partial solutions, the disturbing contradictions.. it is, in a small way, an exhilarating experience, worthy of a creature endowed w reason; and when the solution has been found, there is a sense of distension, of pleasure, of rest
oh my.. maybe rest for whales.. like ‘rest’ felt when high et al
none of this is true for the computer, not because it is w/o consciousness, but because it proceeds in a fundamentally diff fashion.. we are shocked to learn that in order to make the machine solve the analogy problems the experimenter ‘had to develop what is certainly one of the most complex programs ever written’.. for us the problem is not hard; it is accessible to the brain of a young student.. the reason for the diff is that the task calls for handling of topological relations which require the neglect of purely metric ones.. the brain is geared to precisely such topological features because they inform us of the typical character of things, rather than their particular measurements.. as in the case of chess
oi.. saying/noticing some legit things.. but so loaded.. ie: chess; measurements; et al
1546 as arnheim puts it ‘topology was discovered by, and relies on, the perceptual powers of the brain, not the arithmetical ones’.. obviously minsky and evans think that analogies are solved by human beings by applying transformation rules, because the prospects for ai are only encouraging if this is how humans proceed.. but it is clearly circular to base one’s optimism on an hypothesis which, in turn, is only justified by the fact that if the hypothesis were true, one’s optimism would be justified
ie of the same song of perpetuating sea world
p 55/1578 Quillian proposes this program as “a reasonable view of how semantic information is organized within a person’s memory.” He gives no argument to show that it is reasonable except that if a computer were to store semantic information, this would be a reasonable model for it. People, indeed, are not aware of going through any of the complex storage and retrieval processes Quillian outlines, but this does not disturb Quillian, who, like his teacher, Simon, in similar trouble can always claim that these processes are nonetheless unconsciously taking place: p 56 Quillian seems to have inherited Newell and Simon’s unquestioned assumption that human beings operate by heuristic programs. p 57/1593 But with a frankness, rare in the literature, Quillian also reports his disappointments: Unfortunately, two years of work on this problem led to the conclusion that the task is much too difficult to execute at our present stage of knowledge. The processing that goes on in a person’s head when he “understands” a sentence and incorporates its meaning into his memory is very large indeed, practically all of it being done without his conscious knowledge. 1607 These difficulties suggest that the model itself.. the idea that our understanding of a natural language involves building up a structured whole out of an enormous number of explicit parts.. may well be mistaken. Quillian’s work raises rather than resolves the question of storing the gigantic number of facts resulting from an analysis which has no place for perceptual gestalts. If this data structure grows too rapidly with the addition of new definitions, then Quillian’s work, far from being encouraging, would be a reductio ad absurdum of the whole computer oriented approach. p 58/1622 What would be reasonable to expect? Minsky estimates that Quillian’s program now contains a few hundred facts. He estimates that “a million facts would be necessary for great intelligence.”
oi
Minsky claims in another of his books that within a generation.. few compartments of intellect will remain outside the machine’s realm.. the problem of creating “artificial intelligence” will be substantially solved. Certainly there is nothing in Semantic Information Processing to justify this confidence. As we have seen, Minsky criticizes the early programs for their lack of generality. “Each program worked only on its restricted specialty, and there was no way to combine two different problem solvers.” But Minsky’s solutions are as ad hoc as ever. Yet he adds jauntily: The programs described in this volume may still have this character, but they are no longer ignoring the problem. In fact, their chief concern is finding methods of solving it. but there is no sign that any of the papers presented by minsky have solved anything.. they have not discovered any general feature of the human ability to behave intelligently.. all minsky presents are clever special solutions, like bobrow’s and evans’, or radically simplified models such as quillian’s which work because the real problem, the problem of how to structure and store the mass of data required, has been put aside
or is it that you can’t even collect enough/right data for thinking.. one could however.. perhaps.. collect enough data for connecting (hlb that io dance).. but not for thinking – that embodiment/indigenous piece.. that a machine can’t have/be oi.. this thinking is fractal ing minsky/machines.. ie: makes little diff if non legit (aka: whales) data
1653 the restricted character of the results reported by minsky, plus the fact that during the last 5 yrs none of the promised generalizations has been produced, suggests that human beings do not deal w a mass of isolated facts as does a digital computer, and thus do not have to store/retrieve these facts by heuristic rules.. judging from *their behavior, human beings avoid rather than resolve the difficulties confronting workers in cog simulation and ai by avoiding the discrete info processing techniques from which these difficulties arise.. thus it is by no means obvious that minsky’s progress toward handling ‘knowledge’ (slight as it is) is progress toward ai at all..
loaded ie: *whales.. not human beings
conclusion 1669 we still need to know the kind of heuristics we need to find heuristics, as well as what languages can readily describe them
ie: cure ios city via idiosyncratic jargon.. heuristic: enabling a person to discover or learn something for themselves; a ‘hands-on’ or interactive heuristic approach to learning.. in computing: proceeding to a solution by trial and error or by rules that are only loosely defined
p 63/1699 why do those working in Artificial Intelligence assume that there must be a digital way of performing human tasks? To my knowledge, no one in the field seems to have asked himself these questions. In fact, artificial intelligence is the least self-critical field on the scientific scene. There must be a reason why these intelligent men almost unanimously minimize or fail to recognize their difficulties, and continue dogmatically to assert their faith in progress. Some force in their assumptions, clearly not their success, must allow them to ignore the need for justification. We must now try to discover why, in the face of increasing difficulties, workers in artificial intelligence show such untroubled confidence.

part 2

ASSUMPTIONS UNDERLYING PERSISTENT OPTIMISM 1714
stopped taking notes here during first read thru – 1714 of 5570 in loc ..
underlying their optimism is the conviction that human info processing must proceed by discrete steps like those of a digital computer, and since nature has produced intelligent behavior w this form of processing, proper programming should be able to elicit such behavior from digital machines, either by imitating nature or by out programming her
the death of us ness
all ai work is done on digital computers because they are the only general purpose info processing devices which we know how to design or even conceive of at present.. all info w which these computers operate must be rep’d in terms of discrete elements.. in the case of present computers the info is rep’d by binary digits, that is in terms of a series of yeses and noes, of switches being open/closed.. the machine must operate on finite strings of these determinate elements as a series of objects related to each other only by rules.. thus the assumption that man functions like a gen purpose symbol manipulating device amounts to 1\ a bio assumption that brain operates in discrete ness of off/on  2\ a psych assumption that mind is bits of info that operate according to formal rules  3\ an epistemological assumption that all knowledge can be formalized.. 4\ all relevant info about the world.. must be analyzable as situation free determinate elements.. assumptions that there is a set of facts each logically independent of all the others
binary ness et al.. naming the colour ness.. opp of organism as fractal ness
1743 1\ bio assumption.. in period between invention of phone relay and digital computer.. the brain, always understood in terms of the latest tech inventions.. was understood as large phone switchboard.. or electronic computer.. still uncritically accepted by practically everyone not directly involved w work in neurophysiology, and underlies the naive assumption that man is a walking ie of a successful digital computer program..
as latest tech rather than organism as fractal
1758 even if brain did function like digi computer.. it would not provide encouragement for those working in cs or ai.. 1773 von neumann goes on to spell out what he takes to be the ‘mixed character of living organism’
organism as fractal.. otherwise spinning or wheels..
1800 if such time factors and field interactions play a crucial role, there is no reason to hope that the info processing on the neurophysiological level can be described in a digital formalism or, indeed, in any formalism at all.. in a system whose numerous elements interact so strongly w each other, the functioning of the system is not necessarily best understood by proceeding on a neuron by neuron basis as if each had an independent personality.. detailed comparison of the organization of computer systems and brains would prove equally frustrating and inconclusive.. thus.. brain as computer hypothesis has had its day.. no arguments as to the possibility of ai can be drawn from current empirical evidence concerning the brain.. the evidence is against the possibility of using digital computers to produce intelligence
not to mention the energy suck of having everything revolve around intellect ness.. we need augmenting interconnectedness much more than ie: augmenting human intellect or augmenting collective intelligence et al.. left off here.. 1815
1815 2\ the psych assumption
ch 3
1845 we have seen in ch 3 that..
so must be ch 4? not sure where i started seeing chapters..
1859 in his classic paper ‘the mathematical theory of communication’ shannon was perfectly clear that his theory, worked out for telephone engineering, carefully excludes as irrelevant the meaning of what is being transmitted.. the fundamental problem of communication is that of reproducing at one point either exactly or approx a message selected at another point. frequently the messages have meaning; that is they refer to or are correlated according to some system w certain physical or conceptual entities.. these semantic aspects of communication are irrelevant to the engineering problem
shaw communication law et al
the word info in this theory is used in a special sense that must not be confused w its ordinary usage. in particular, info must not be confused w meaning.. in fact, two messages, one heavily loaded w meaning and the other pure nonsense, can be exactly equiv from the present viewpoint, as regards info.. it is this that shannon means when he says ‘the semantic aspects of communication are irrelevant to the engineering aspects’ 1874 it is precisely the role of the programmer to make the transition from statements which are meaningful to the strings of meaningless discrete bits.. w which the computer operates.. the ambition of ai is to program the computer to do this translating job itself.. but it is by no means obvious that the human translator can be dispensed with
could be if we tried ie: tech w/o judgment approach.. not translating (so not judging/assuming/enclosing/excluding/limiting et al.. just listening).. just matching
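the shannon point above is easy to make concrete.. info in the engineering sense measures symbol statistics, not meaning. a small sketch (the example strings are made up):

```python
from collections import Counter
from math import log2

def entropy(msg):
    """shannon entropy H = -sum(p * log2 p) over symbol frequencies"""
    counts, n = sorted(Counter(msg).values()), len(msg)  # sorted: stable sum
    return -sum(c / n * log2(c / n) for c in counts)

meaningful = "the cat sat on the mat"
nonsense = "eht tac tas no eht tam"  # same letters, scrambled

# identical symbol statistics -> identical 'info'.. meaning never enters
print(entropy(meaningful) == entropy(nonsense))  # True
```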
1903 there is indeed not the slightest justification for the claim that ‘for each type of behavior in the repertoire of that organism, a putative answer to the question, how does one produce behavior of that type? takes the form of a set of specific instructions for producing the behavior by performing a set of machine operations’ – fodor.. but to say that the brain is necessarily going thru a series of operations when it takes the texture gradient is as absurd as to say that the planets are necessarily solving differential equations when they stay in their orbits around the sun, or that a slide rule (analogue computer) goes thru the same steps when computing a sq root as does a digital computer when using the binary system to compute the same number 1917 the fact that we can describe the process of reaching equilib in terms of equations and then break up these equations into discrete elements in order to solve them on a computer does not show that equilib is actually reached in discrete steps.. likewise, we need not conclude from the fact that all continuous physicochemical processes involved in human ‘info processing’ can in principle be formalized and calculated out discretely, that any discrete processes are actually taking place.. moreover, even if one could write such a computer program for simulating the physicochemical processes in the brain, it would be no help to psychology
not that any of that is even happening.. loaded statements
1931 newell, shaw, and simon have explicitly and systematically used the hierarchical structure of lists in their development of ‘info processing languages’ that are used to program highspeed digital computers to simulate human thought processes.. their success in this direction.. which the present authors find most impressive and encouraging.. argues strongly for the hypothesis that a hierarchical structure is the basic form of org of human problem solving..
oi.. lists.. and red flags
as we have seen.. newell, shaw and simon conscientiously note the similarities and diff’s between human protocols and machine traces recorded *during the solution of the same problem
*that we’re solving/focusing-on same whale problem is part of the problem.. so.. irrelevant.. flapping.. sucking our energy.. we need to get out of sea world .. out of rat park..
newell and simon conclude that their work provides a general framework for understanding problem solving behavior and finally reveals w great clarity that the free behavior of a reasonably intelligent human can be understood as the product of a complex but finite and determinate set of laws
am thinking.. legit free people wouldn’t be solving problems or following rules.. or be talking such ways as ie: ‘reasonably intelligent‘ et al
1961 this is a strangely unscientific conclusion to draw, for newell and simon acknowledge that their specific theories.. like any sci theories.. must stand or fall on the basis of their generality, that is, the range of phenomena which can be explained by the programs..
yeah.. deeper than unscientific.. non legit..
202 where is this growing body of evidence? have the gaps in the protocols been filled and exceptions explained? not at all.. the growing body of evidence seems to be the very programs whose lack of universality would cast doubt on the whole project but for the independent assumption of the digital hypothesis.. no independent empirical evidence exists for the psychological assumption.. in fact the same empirical evidence presented for the assumption that the mind functions like a digital computer tends, when considered w/o making this assumption, to show that the assumption is empirically untenable.. 2034 for workers in the field, the psych assumption seems not to be an empirical hypothesis that can be supported or disconfirmed, but some sort of philosophical axiom whose truth is assured a priori
whalespeak/ loaded ness
no wonder psychologists such as newell, neisser, and miller find work in cog simulation encouraging.. in their view, if psych is to be possible at all an explanation must be expressible as a computer program.. this is not an empirical observation but follows from their defn of explanation.. divergences from the protocol and failures can be ignored.. no matter how ambiguous the empirical results in cog simulation, they must be a first step toward a more adequate theory 2049 can one prejudge the results in psychology by insisting theories must be computer programs because otherwise psych isn’t possible? perhaps psych as understood by the cog simulationists is a dead end
not just a dead end.. dead.. understood ness as naming the colour ness.. as blinding/killing/cancer ness
consider the behavior involved in selecting, on command, a red square from a multicolored array of geometrical figures
consider that these are our ie’s of behavior.. that we even have ie’s (aka: supposed to’s) of behavior.. that we’re obsessed/intoxicated w observing/defining.. of naming the colour/behavior.. so loaded
perhaps some very gen rules such as listen to the instructions, look toward the objects, consider the shapes, make your selection. but what about the detailed instructions for identifying a square rather than a circle..
spinach or rock ness.. alive beings need to listen to/for curiosity over decision making
and if all this does not seem strange enough
same too for any form of m\a\p.. any form of democratic admin
2063 still such a claim is the heir to a venerable tradition.. kant analyzed all experience/perception in terms of rules/instructions.. only such anal enables us to understand what is going on.. goes back to begin of philosophy.. that is to the time when our concepts of understanding and reason were first formulated.. plato, who formulated this anal of understanding.. goes on to ask whether the rules required to make behavior intelligible to the philosopher are necessarily followed by the person who exhibits the behavior.. that is, are the rules only necessary if the philosopher is to understand what is going on, or are these rules necessarily followed by the person insofar as he is able to behave intelligently..
*let go
plato thought that although people acted w/o necessarily being aware of any rules, their action did have a rational structure which could be explicated by the philosopher, and he asks whether the mathematician and the moral agent are implicitly following this program when behaving intelligently.. this is a decisive issue for the history of our concepts of understanding and explanation.. plato leaves no doubt about his view: any action which is sensible ie: nonarbitrary, has a rational structure which can be expressed in terms of some theory.. following this very theory as a set of rules.. for plato, these instructions are already in the mind.. preprogrammed in a previous life, and can be made explicit by asking the subjects the appropriate questions.. given this.. one is bound to arrive at the cog simulationist assumption that it is self evident that a complete description of behavior is a precise set of instructions for a digital computer and that these rules can actually be used to program computers to produce the behavior in question 2076 we have already traced the history of this assumption that thinking is calculating.. harks back to platonic realization that moral life would be more bearable and knowledge more definitive if it were true.. its plausibility, however, rests only on a confusion between the mechanistic assumptions underlying the success of modern physical science and a correlative formalistic assumption underlying what would be a science of human behavior if such existed on one level, this a priori assumption makes sense.. man is an object 2090 the brain is clearly an energy transforming organ. it detects incoming signals; for ie, it detects changes in light intensity correlated w changes in texture gradient.. unfortunately for psychologists, this physical description.. is in no way a psychological explanation.. on this level one would not be justified in speaking of human agents, the mind, intentions, perceptions, memories, or even of colors or sounds, as psychologists want to do.. energy is being received and transformed and this is the whole story.. there is of course another level.. let us call it phenomenological.. on which it does make sense to talk of human agents, acting, perceiving objects.. this level of description is no more satisfactory to a psychologist than the physiological level, since there is no awareness of following instructions or rules; there is no place for a psychological explanation of the sort the cog simulationist demands.. faced w this conceptual squeeze, psychologists have always tried to find a third level on which they can do their work, a level which is psychological and yet offers an explanation of behavior 2103 if psych is to be a sci of human behavior.. it must study man as an object.. but not as a physical object, moving in response to inputs of physical energy, since that is the task of physics and neurophysiology.. the alt is to try to study human behavior as the response of some other sort of object to some other sort of input.. man must be treated as some device responding to discrete elements, according to laws
oi..  so loaded.. ie: another alt.. let go of naming the colour et al
until the advent of the computer the empiricist school had the edge because the intellectualist view never succeeded in treating man as a calculable object.. ie: installing a little man (homunculus) in the mind to guide its actions.. computers offer the irresistible attraction of operating according to rules w/o appeal to a transcendental ego or homunculus.. 2117 moreover computer programs provide a model for the anal of behavior such as speaking a natural language which seems to be too complex.. in short.. there is now a device which can serve as a model for the mentalist view.. and it is inevitable that regardless of the validity.. psychologists dissatisfied w behaviorism will clutch at this high powered straw.. a computer is a physical object, but to describe its operation, one does not describe the vibrations of the electrons in its transistors.. but rather the levels of org of its on/off flip/flops.. if psych concepts can be given an interp in terms of the higher levels of org of these rule governed flip/flops then psych will have found a language in which to explain human behavior.. the rewards are so tempting that the basic question, whether this third level between physics and phenomenology is a coherent level of discourse or not, is not even posed.. 2163 info is the concept which is supposed to rescue us from this confusion.. neisser says ‘info is what is transformed and the structured pattern of its transformation is what we want to understand’.. but as long as the notion of ‘stimulus input’ is ambiguous, it remains unclear what info is and how it is supposed to be related to the ‘stimulus input’ be it energy or direct perception
we need to let go of stimulus input ness.. that’s counter to ie: brown belonging law.. what we need is a means to listen deeper
2251 the need to believe in the information processing level if psychology is to be a science
oi
2266 conclusion so we again find ourselves moving in a vicious circle
as we will/are.. until we let go of any form of m\a\p
2282 it is impossible to process an indifferent ‘input’ w/o distinguishing between relevant and irrelevant, significant and insignificant data
and today.. all data is irrelevant.. non legit.. whalespeak
2295 these difficulties suggest that, although man is *surely a physical object processing physical inputs according to the laws of physics and chem, man’s behavior may not be explainable in terms of an info processing mech receiving and processing a set of discrete inputs
*surely loaded
2320 there is thus a subtle but important diff between the psych and the epistemological assumptions.. both assume the platonic notion of understanding as formalization, but those who make the psych assumption (those in cs) suppose that the rules used in the formalization of behavior are the very same rules which produce the behavior, while those who make the epistem assumption (those in ai) only affirm that all non arbitrary behavior can be formalized according to some rules, and that these rules, whatever they are, can then be used by a computer to reproduce the behavior 2333 if this argument is convincing, the epistem assumption, in the form in which it seems to support ai, turns out to be untenable, and correctly understood, argues against the possibility of ai, rather than guaranteeing its success.. the claim that non arbitrary behavior can be formalized is not an axiom.. rather.. expresses a certain conception deeply rooted in our culture.. but may nonetheless turn out to be mistaken.. it should be clear.. that no empirical arguments from the success of ai are acceptable, since it is precisely the interp and above all the possibility of significant extension of the meager results such as bobrow’s which is in question 2347 minsky’s optimism.. that is his conviction that all non arbitrary behavior can be formalized and the resulting formalism used by a digital computer to reproduce that behavior – is a pure case of the epistem assumption.. it is this belief which allows minsky to assert w confidence that ‘there is no reason to suppose that machines have any limitations not shared by man’.. a digital computer is a machine which operates according to the sort of criteria plato once assumed could be used to understand any orderly behavior
that’s the problem.. aliveness is not orderly.. and not set patterned/behavior
this machine, as defined by minsky, who bases his defn on that of turing, is a ‘rule obeying mech’.. as turing puts it: ‘the computer is supposed to be following fixed rules.. it is the duty of the control to see that these instructions are obeyed correctly and in the right order..’
see.. that’s a killer to alive ness.. so.. what computers can’t do?.. they can’t be alive.. which means they’re not even in the fractal sphere of human ness
the claim is made that this sort of machine.. a turing machine.. essence of digital computer.. can do anything that a human being can do.. that it has only those limitations shared by man
oi.. alan turing
minsky considers the anti formalist counterclaim that ‘perhaps there are processes.. which simply cannot be described in any formal language.. but which can nevertheless be carried out.. by minds’.. turing does take up this sort of objection.. he states it as follows: ‘it is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances’.. turing’s ‘refutation’ is to make a distinction between ‘rules of conduct’ and ‘laws of behavior’ and then to assert that ‘we cannot so easily convince ourselves of the absence of complete laws of behavior as of complete rules of conduct’
oi – we need to let go.. of any form of m\a\p
in other words, while turing is ready to admit that it may in principle be impossible to provide a set of rules describing what a person should do in every circumstance, he holds there is no reason to doubt that one could in principle discover a set of rules describing what he would do.. but why does this supposition seem so self evident that the burden of proof is on those who call it into question? why should we have to ‘convince ourselves of the absence of complete laws of behavior’ rather than of their presence? here we are face to face again w the epistem assumption.. it is important to try to *root out what lends this assumption its implied a priori plausibility
*that’s easy.. intoxicated whales
2375 in one sense human behavior is certainly lawful, if lawful simply means orderly
certainly? yeah .. that’s priori plausibility ness .. we have no idea
but the assumption that the laws in question are the sort that could be embodied in a computer program or some equiv formalism is a diff and much stronger claim, in need of further justification.. the idea that any description of behavior can be formalized in a way appropriate to computer programming leads workers in the field of ai to overlook this question
not to mention overlook the cancerous ness of observing/describing ‘behavior’.. once we start observing/describing/naming the colour.. it’s the death of us and the birth/perpetuation of whales
it is assumed that, in principle at least, human behavior can be rep’d by a set of independent propositions describing the inputs to the organism, correlated w a set of propositions describing its outputs.. the clearest statement of this assumption can be found in james culbertson’s move from the assertion that in theory at least one could build a robot using only flip/flops to the claim that it could therefore reproduce all human behavior
oi.. so loaded
2389 since (these complete robots) can, in principle, satisfy any given input/output specification, they can do any *prescribed things under any prescribed circumstances.. ingeniously solve problems, compose symphonies, create works of art and lit and engineering and pursue any goals..
yeah *prescription ness is a huge red flag.. it’s not human.. not alive.. let go
but as we have seen in ch 4, it is not clear in the case of human beings what these inputs and outputs are supposed to be
because they’re not supposed to be anything..
there is no reason to suppose and several reasons to doubt that human inputs/outputs can be isolated and their correlation formalized. culbertson’s assumption is an assumption and nothing more, and so in no way justifies his conclusions.. 2404 in general, by accepting the fundamental assumptions that the nervous system is part of the physical world and that all physical processes can be described in a math formalism which can in turn be manip’d by a digital computer, one can arrive at the strong claim that the behavior which results from human ‘info processing’, whether directly formalizable or not, can always be indirectly reproduced on a digital machine
oi.. of math and men
this claim may well account for the formalist’s smugness, but what in fact is justified by the fundamental truth that every form of ‘info processing’.. must in principle be simulable on a digital computer? we have seen it does not prove the mentalist claim that, even when a human being is unaware of using discrete operations in processing info, he must nonetheless be unconsciously following a set of instructions.. does it justify the epistem assumption that all non arbitrary behavior can be formalized? 2417 no, when minsky or turing claims that man can be understood as a turing machine, they must mean that a digital computer can reproduce human behavior, not by solving physical equations.. but by processing data received from the world, by means of logical operations that can be reduced to *matching, classifying and boolean operations.. as minsky puts it: mental processes resemble.. the kinds of processes found in computer programs: arbitrary symbol association, treelike storage schemes, conditional transfers, and the like..
imagine if we.. just do a full stop at *matching
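(a rough sketch.. not dreyfus’s or minsky’s code.. just to make the quote concrete.. every name/structure below is invented for illustration.. showing the kinds of operations minsky lists – symbol association, treelike storage, conditional transfer, matching):

```python
# a minimal sketch (assumptions, not the book's method) of minsky's list:
# arbitrary symbol association, treelike storage, conditional transfer,
# and plain matching.. all names here are made up

# arbitrary symbol association: a symbol table
associations = {"dog": "animal", "chair": "furniture"}

# treelike storage scheme: nested dicts as a crude tree
tree = {"thing": {"animal": {"dog": {}}, "furniture": {"chair": {}}}}

def classify(symbol, node, path=()):
    """walk the tree looking for symbol; a boolean test (the conditional
    transfer) at each node; the == comparison is the matching step."""
    for key, child in node.items():
        if key == symbol:                      # matching
            return path + (key,)
        found = classify(symbol, child, path + (key,))
        if found:
            return found
    return None

print(classify("dog", tree))   # ('thing', 'animal', 'dog')
```

a full stop at *matching would keep only the == test and drop the stored tree/classification altogether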

huge

2446 if these calcs are correct, there is a special kind of impossibility involved in any attempt to simulate the brain as a physical system.. the enormous calcs necessary may be precluded by the very laws of physics and info theory such calcs presuppose.. yet workers in the field of ai from turing to minsky seem to take refuge in this confusion between physical laws and info processing rules to convince themselves that there is reason to suppose that human behavior can be formalized; that the burden of proof is on those who claim that ‘there are processes.. which simply cannot be described in a formal language but which can nevertheless be carried out ie.. by minds’.. if no argument based on the success of physics is relevant to the success of ai, because ai is concerned w formalizing human behavior not physical motion, the only hope is to turn to areas of the behavioral sciences themselves.. galileo was able to found modern physics by abstracting from many of the properties and relations of aristotelian physics and finding that the math relations which remained were sufficient to describe the motion of objects.. what would be needed to justify the formalists’ optimism would be a galileo of the mind who by making the right abstractions could find a formalism which would be sufficient to describe human behavior
oi – describing behavior.. naming the colour – is the death of us.. so this is all just irrelevant.. blah blah.. whalespeak
2461 chomsky and transformational linguists.. have found.. they can provide a formal theory of much of linguistic competence.. this success is a major source of encouragement for those in ai who are committed to the view that human behavior can be formalized w/o reduction to the physical level 2475 but such a formalization only provides justification for half the epistem hypothesis..
if that
linguistic competence is not what ai workers wish to formalize.. if machines are to communicate in natural language, their programs must not only incorp the rules of grammar; they must also contain rules of linguistic performance. in other words what was omitted in order to be able to formulate linguistic theory.. the fact that *people are able to use their language – is just what must also be formalized
language as control/enclosure et al.. let’s let go.. and try idiosyncratic jargon sans rules/formalization/enclosure et al.. ie: *some people we need a means to undo our hierarchical listening
2490 this case takes us to the heart of a fundamental difficulty facing the simulators.. programmed behavior is either arbitrary or strictly rulelike..
deeper difficulty/problem.. programmed behavior is death
therefore, in confronting a new usage a machine must either treat it as a clear case falling under the rules or take a blind stab..
or let go
a native speaker feels he has a third alt.. he can recognize the usage as odd, not falling under the rules, and yet he can make sense of it.. give it a meaning in the context of human life in an apparently nonrulelike and yet nonarbitrary way 2504 no matter what order of metarules one chooses, it seems there will be a higher order of tacit understanding about how to break those rules and still be understood 2548 to get from the linguistic formalism to specific performance, one has to take into account the speaker’s understanding of his situation. if there could be an autonomous theory of performance, it would have to be an entirely new kind of theory, a theory for a local context which described this context entirely in universal yet non physical terms. neither physics nor linguistics offers any precedent for such a theory, nor any comforting assurance that such a theory can be found
let’s let go enough.. to try a nother way
2548 conclusion 2580 since we do manage to use language..
do we? all of us? ie: language as control/enclosure nothing to do w the argument he’s making.. about rules to explain how rules are applied et al
the computer is not in a situation.. it generates no local context. the computer theorist’s solution is to build the machine to respond to ultimate bits of context free, completely determinate data which require no further interp in order to be understood
huge.. let’s use tech w/o judgment
2594 this assumption that the world can be exhaustively analyzed in terms of determinate data or atomic facts is the deepest assumption underlying work in ai and the whole philosophical tradition. we shall call it the ontological assumption and now turn to analyzing its attraction and its difficulties 2609 now we turn to an even more fundamental difficulty facing those who hope to use digital computers to produce ai: the data w which the computer must operate if it is to perceive, speak, and in general behave intelligently, must be discrete, explicit, and determinate; otherwise, it will not be the sort of info which can be given to the computer so as to be processed by rule.. yet there is no reason to suppose that such data about the human world are available to the computer and several reasons to suggest that no such data exist
idiosyncratic jargon could work.. if we’d only let go enough to see..
2621 we shall soon see that this assumption lies at the basis of all thinking in ai, and that it can seem so self evident that it is never made explicit or questioned.. what in fact is only an hypothesis reflects 2000 yrs of philosophical tradition reinforced by a misinterp of the success of the physical sciences
black science of people/whales law et al
2666 granting for the moment that all human knowledge can be analyzed as a list of objects and of facts about each, minsky’s anal raises the problem of how such a large mass of facts is to be stored/accessed.. how could one structure these data.. a hundred thousand discrete elements – so that one could find the info required in a reasonable amount of time? when one assumes that our knowledge of the world is knowledge of millions of discrete facts, the problem of ai becomes the problem of storing and accessing a large data base
so this is benjamin bratton deep address ness and vinay gupta label the object ness
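(a toy version.. mine, not the book’s.. of the ‘large data base problem’.. every fact/name below is invented.. storing and fetching discrete facts is the easy half; nothing in the index says which facts matter right now):

```python
# a toy sketch (assumptions, not dreyfus's or minsky's) of the
# 'large data base problem': storage and access are cheap once you
# already know what to ask for.. relevance is the unsolved half

facts = {
    ("chair", "has-part"): "legs",
    ("chair", "used-for"): "sitting",
    ("insect", "count-in-room"): "unknown",
    # ..imagine a hundred thousand more entries
}

def lookup(obj, relation):
    """constant-time access -- if you already know the question."""
    return facts.get((obj, relation))

print(lookup("chair", "used-for"))  # 'sitting'
# but which (obj, relation) pairs are relevant to this situation?
# the index can't say -- that's the regress the text points at
```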
2679 one is tempted to say: ‘it would be folly to base our intelligent machine upon some particular elab, thesaurus like classification of knowledge, some ad hoc synopticon.. surely that is no road to ‘general intelligence”.. and indeed little progress has been made toward solving the large data base problem.. but in spite of his own excellent objections.. minsky characteristically concludes: but we had better be cautious about this caution itself, for it exposes us to a far more deadly temptation: to seek a fountain of pure intelligence.. i see no reason to believe that intelligence can exist apart from a highly organized body of knowledge, models, and processes
oi.. intellect ness et al.. need to let go of augmenting collective intelligence.. et al and focus on (org around) augmenting interconnectedness
the problem solving abilities of a highly intelligent person lies partly in his superior heuristics for managing his knowledge structure and partly in the structure itself; these are probably somewhat inseparable.. in any case, there is no reason to suppose that you can be intelligent except thru the use of an adequate, particular, knowledge or model structure.. but this is no argument for optimism
no argument for being.. ie: intellect ness is cancerous to being ness
it is by no means obvious that in order to be intelligent human beings have somehow solved or needed to solve the large data base problem.. the problem may itself be an artifact created by the fact that the computer must operate w discrete elements..
yeah.. that.. but even deeper
human knowledge does not seem to be analyzable into simple categories as minsky would like to believe.. a mistake, a collision, an embarrassing situation, etc, do not seem on the face of it to be objects or facts about objects.. even a chair is not understandable in terms of any set of facts or elements of knowledge.. to recognize a chair one needs to understand its relation to other objects/humans.. these factors are no more isolable than is the chair.. they all may get their meaning in the context of human activity of which they form a part 2700 in assuming that what is given are facts at all, minsky is simply echoing a view which has been developing since plato and has now become so ingrained as to seem self evident.. as we have seen, the goal of the philosophical tradition embedded in our culture is to eliminate all risk: moral, intellectual, and practical.. indeed, the demand that knowledge be expressed in terms of rules or definitions which can be applied w/o the risk of interp is already present in plato, as is the belief in simple elements to which the rules apply..
safety addiction
2714 the empiricist tradition, too, is dominated by the idea of discrete elements of knowledge.. for hume, all experience is made up of impressions: isolable, determinate, atoms of experience.. intellectualist and empiricist schools converge in russell’s logical atomism, and the idea reaches its fullest expression in wittgenstein’s tractatus, where the world is defined in terms of a set of atomic facts which can be expressed in logically independent propositions.. this is the purest formulation of the ontological assumption, and the necessary precondition of the possibility of ai.. given ie: digital computers’ true/false ness

thus both philosophy and tech finally posit what plato sought: a world in which the possibility of clarity, certainty and control is guaranteed; a world of data structure, decision theory, and automation

aka: sea world

2727 no sooner had this certainty finally been made fully explicit however, than philosophers began to call it into question.. merleau ponty calls the assumption ‘presumption of common sense’.. heidegger calls it ‘calculating thought’.. and views it as the goal of philosophy, inevitably culminating in tech.. thus, for heidegger, tech, w its insistence on the ‘thoroughgoing calculability of objects’ is the inevitable culmination of metaphysics.. the exclusive concern w beings (objects) and the concomitant exclusion of Being (very roughly our sense of human situation which determines what is to count as an object)..
maurice merleau-ponty.. martin heidegger..
2741 even if what gave impetus to the philosophical tradition was the demand that things be clear and simple so that we can understand and control them, if things are not so simple why persist in the optimism? *what lends plausibility to this dream? as we have already seen in another connection, the myth is fostered by the success of modern physics.. here, at least to a first approx, the ontological assumption works..
*the cage/sea world.. the ongoing perpetuation of tragedy of the non common
2858 this suggests that unless there are some facts whose relevance and significance are invariant in all situations – and no one has come up w such facts – we will have to give the computer a way of recognizing situations; otherwise, it will not be able to disambiguate and thus it will be, in principle, unable to understand utterances in a natural language.. among workers in ai, only joseph weizenbaum seems to be aware of the problem. in his work on a program which would *allow people to converse w a computer in a natural language, weizenbaum has had to face the importance of the situation, and realizes that it cannot be treated simply as a set of facts. his remarks on the importance of global context are worth quoting at length: (on strangers meeting and still able to communicate)
*natural to them.. ie: idiosyncratic jargon.. and not really converse.. but rather supply data.. with potential upgrades/clarifications.. so that the computer/tech can use the data to augmenting our interconnectedness – (ai humanity needs) https://en.wikipedia.org/wiki/Joseph_Weizenbaum
2872 i call attention to the contextual matter.. to underline the thesis that, while a computer program that ‘understands’ natural language in the most general sense is for the present beyond our means, the granting of even a quite broad contextual framework allows us to construct practical language recognition procedures.. thus, weizenbaum proposes to program a nest of contexts in terms of a ‘contextual tree’.. that is.. an org’d collection of facts concerning a person’s knowledge, emotional attitudes, goals and so forth
don’t even need that.. just need matching.. so actually.. the less info (stored).. the better
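(for contrast.. a guess, mine not weizenbaum’s, at the shape such a ‘contextual tree’ would take as data.. field names invented.. which makes the ‘less info stored the better’ point above easier to see):

```python
# a guessed shape (not weizenbaum's actual program) for a 'contextual
# tree': an org'd collection of facts about a person's knowledge,
# attitudes, goals.. every field name here is invented

contextual_tree = {
    "context": "social intercourse",
    "facts": {"wearing": "lab coat", "doing": "lecturing"},
    "children": [
        {"context": "academic situation",
         "facts": {"goal": "explain"},
         "children": []},
    ],
}

# every disambiguation has to climb this stored structure..
print(contextual_tree["children"][0]["facts"]["goal"])  # 'explain'
```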
2906 fortunately, there does seem to be something like an ultimate context, but as we shall see, this proves to be as unprogrammable as the regress it was intro’d to avoid.. we have seen that in order to id which facts are relevant for recognizing an academic or a conspiratorial situation, and to interpret these facts, one must appeal to a broader context.. thus it is only in the broader context of social intercourse that we see we must normally take into account what people are wearing and what they are doing.. but not how many insects there are in the room or the cloud formations at noon or a minute later..
oi.. broader/deeper.. ie: maté basic needs
2920 ‘what has to be accepted, the given is – someone could say – forms of life’ – wittgenstein.. well then, why not make explicit the significant features of the human form of life from w/in it? indeed, this deus ex machina solution has been the implicit goal of philosophers for 2000 yrs, and it should be no surprise that nothing short of a formalization of the human form of life could give us ai (which is not to say that this is what gives us normal intelligence).. but how are we to proceed? everything we experience in some way, immediate or remote, reflects our human concerns.. w/o some particular inquiry to help us select and interp we are back confronting the infinity of meaningless facts we were trying to avoid
oi
on the one hand, we have the thesis: there must always be a broader context; otherwise we have no way to distinguish relevant/irrelevant facts.. on the other hand, we have the antithesis: there must be an ultimate context, which *requires no interpretation; otherwise there will be an infinite regress of contexts, and we can never begin our formalization
yeah that.. *requires no interp.. ie: tech w/o judgment
human beings seem to embody a third possibility which would offer a way out of this dilemma.. instead of hierarchy of contexts, the present situation is recognized as a continuation or modification of the previous one.. thus we carry over from the immediate past a set of anticipations based on what was relevant and important a moment ago.. this carryover gives us certain predispositions as to what is worth noticing..
oi.. this is history ness and research ness.. any form of m\a\p.. and why we keep perpetuating sea world tragedy of the non common.. et al
2935 programming this alt, however, far from solving the problem of context recognition merely transforms a hierarchical regress into a temp one.. how does the situation which human beings carry along get started?.. answer seems to be: human beings are simply wired genetically as babies to respond to certain features of the environ such as nipples and smiles which are crucially important for survival.. programming these initial reflexes and letting the computer learn might be a way out of the context recognition problem; but it is important to note two reservations: no present work in ai is devoted to this approach.. in fact.. ai as it is now defined.. seems to be the attempt to produce *fully formed adult intelligence.. moreover.. it leaves unexplained how the child develops from fixed responses elicited by fixed features of the environ.. to the determination of meaning in terms of context which even ai workers agree characterizes the adult..
yeah.. that.. ai characterizes the *programmed whale.. not the legit free/not-yet-scrambled 5 yr old
2949 there seems to be no way to get into a situation and no way to recognize one from the outside
which is fine (and undeadly).. since we need none of that for tech as it could be
2962 there must be another alt, however, *since language is used and understood.. the only way out seems to be to deny the separation of fact and situation
there is.. even though *language isn’t understood/used by all.. shaw communication law
part 3 will show how this latter alt is possible and how it is related to the rest of human life.. only then will it become clear why the fixed feature alt is empirically untenable, and also why the human form of life cannot be programmed..
conclusion 2977 there is no reason to deny the growing body of evidence that human and mechanical info processing proceed in entirely diff ways..
deeper.. human side.. not procedural.. machine side.. just info matching/org-ing.. not processing
which leaves untouched the problem of how to formalize the totality of human knowledge presupposed in intelligent behavior.. this fundamental difficulty is hidden by the epistem and ontological assumptions that all human behavior must be analyzable in terms of rules relating atomic facts
or that we keep thinking that intellect ness is something.. and something worth sucking our energies.. we need to let go
2990 present difficulties in game playing, language translation, problem solving, and pattern recognition, however, indicate a limit to our ability to sub one kind of info processing for another
taleb center of problem law
3003 alts to the traditional assumptions intro 3019 there may be an alt way for understanding human reason
let go
such an alt view has many hurdles to overcome.. the greatest of these is that it cannot be presented as an alt scientific explanation.. we have seen that what counts as ‘a complete description’ or an explanation is determined by the very tradition to which we are seeking an alt
the ongoing perpetuation of sea world ness
3033 it is not some specific explanation, then, that has failed, but the whole conceptual framework which assumes that an explanation of human behavior can and must take the platonic form.. if this whole approach has failed, then in proposing an alt account we shall have to propose a diff sort of explanation .. a diff sort of answer to the question ‘how does man produce intelligent behavior’ or *even a diff sort of question, for the notion of ‘producing’ behavior instead of simply exhibiting it is already colored by the tradition.. for a product must be produced in some way; and if it isn’t produced in some definite way, the only alt seems to be that it is produced magically.. there is a kind of answer to this question which is not committed beforehand to finding the precise rulelike relations between precisely defined objects.. it takes the form of a phenomenological **description of the behavior involved..
oh my.. let go (of **describing behavior et al).. that’s not diff.. that’s not a legit alt.. (like you *just said was needed)
3060 taking this suggestion to heart, we shall explore three areas necessarily neglected in cs and ai but which seem to underlie all intelligent behavior
yeah see.. you’re doing just what you said in last bit/para not to do
3075 the role of the body in intelligent behavior
loaded..
the tradition from plato to descartes has thought of the body as getting in the way of intelligence and reason.. rather than being in any way indispensable for it.. if the body turns out to be indispensable for intelligent behavior, then we shall have to ask whether the body can be simulated on a heuristically programmed digital computer.. if not, then the project of ai is doomed from the start.. these are the questions to which we must now turn..
those aren’t the questions.. let go
3132 all is well as long as you are willing to have a fairly restricted universe of speakers or sounds or both.. w/in these limitations you can play some very good tricks..
yeah.. that’s what we’ve been doing forever.. that’s sea world et al
3175 merleau ponty points out that most of what we experience must remain in the background so that something can be perceived in the foreground 3215 philosophers have thought of man as a contemplative mind passively receiving data about the world and then ordering the elements.. physics has made this conception plausible on the level of the brain as a physical object..
not us – whalespeak
3259 this slide from gestalt anticipation to preset plan is an obfuscation necessitated by the computer model: a gestalt determines the meaning of the elements it organizes; a plan or a rule simply organizes independently defined elements. moreover, just as the elements (the beats) cannot be defined independently of the gestalt, the gestalt (the rhythm) is nothing but the org of the elements. a plan, on the other hand, can be stated as a rule or program, independently of the elements.. this diff is neglected in all cs models.. yet it is the essence of the gestaltist insight and accounts for the flexibility of human pattern recognition compared to that of machines
do legit free beings do that? pattern recognition? or do they just dance? ‘in undisturbed ecosystems ..the average individual, species, or population, left to its own devices, behaves in ways that serve and stabilize the whole..’ –Dana Meadows
3274 the gestaltists were ‘nativists’, believing that the perceptual processes were determined by necessary and innate principles rather than by learning
yeah more of/like that..
the perceived world always took the ‘best’ the ‘structurally simplest’ form, because of the equilibrium principle that transcends any possible effects of learning or practice..
not yet scrambled ness et al
3290 thus, even if the digital model of the brain had existed at the time, the gestaltists would have rejected it.. neisser does not see this. he supposes that the digital model of built in rules which the linguists have been led to propose, is an improvement of the analogue model proposed by the gestaltists.. neisser’s praise of the linguists’ ‘improvement’ ignoring as it does the difficulties in ai, the latest developments in neurophysiology, and the reason the gestaltists proposed an analogue model in the first place can only be a non sequitur: the gestalt psychologists were never able to provide any satisfactory description or anal of the structure involved in perception.. the few attempts to specify ‘fields of force’ in vision or ‘ionic equilibria’ in the brain, were ad hoc and ended in failure.. in linguistics, by contrast, the study of ‘syntactic structure’ has a long history.. how the long history of syntactic structures is supposed to show that the linguists have a better model of neural processes than the gestaltists is totally unclear.. it seems to mean that at least the rules the linguists are looking for would be, if they were found, the sort of rules one could process w a digital computer which we already understand, whereas the gestaltist equilib principles could only be simulated on a brain like analogue computer, which no one at present knows how to design 3318 to have an alt account of intelligent behavior we must describe the general and fundamental features of human activity. in the absence of a workable digital computer model, and leaving to the neurophysiologist the question of how the brain integrates incoming physical stimuli, we must again ask, how do human beings use an underdetermined, wholistic expectation to organize their experience
rather.. do/would legit free humans org their experience?
3334 he (ponty) argues that it is the body which confers the meaning discovered by husserl.. after all, it is our body which captures a rhythm. we have a body set to respond to the sound pattern.. this body set is not a rule in the mind which can be formulated or entertained apart from the actual activity of anticipating the beats.. generally in acquiring a skill – in learning to drive, dance, or pronounce a foreign language, for ie.. at first we must slowly, awkwardly, and consciously follow the rules.. but then there comes a moment when we finally transfer control to the body.. at this point we do not seem to be simply dropping these same rigid rules into unconsciousness; rather we seem to have picked up the muscular gestalt which gives our behavior a new flexibility and smoothness. the same holds for acquiring the skill of perception
yeah.. i think the slow/awkward ness is because following rules is not natural.. rather it’s a sea world manufactured consent/mandate
3349 seeing too, is a skill that has to be learned
whalespeak
my body enables me to bypass this formal anal 3363 a human perceiver, like a machine, needs feedback to find out if he has successfully recognized an object
really? or whalespeak
3377 existential phenomenologists such as merleau ponty have related this ability to our active, *organically interconnected body, set to respond to its environ in terms of a continual sense of its own functions/goals.. since it turns out that pattern recognition is a bodily skill basic to all intelligent behavior, the question of whether ai is possible boils down to the question of whether there can be an artificial embodied agent
*not sure legit interconnected responds.. or senses functions/goals.. i think that’s whalespeak.. too formalized/artificial/cancerous for the dance to dance.. am thinking that ‘intelligent behavior’ is already artificial
3419 but merleau ponty admits that this ability seems ‘magical’ from the pov of science, so we should not be surprised to find that rather than have no explanation of what people are able to do, the computer scientist embraces the assumption that people are unconsciously running w incredible speed thru the enormous calculations which would be involved in programming a computer to perform a similar task. however implausible, this view gains persuasiveness from the absence of an alt account.. i now have a way of bringing two objects together in objective space w/o appealing to any principle except: ‘do that again’.. this is presumably the way skills are built up.. the important thing about skills is that, although science requires that the skilled performance be described according to rules, these rules need in no way be involved in producing the performance.. 3433 the same anal helps dissipate the mistaken assumptions underlying early optimism about language translation.. if human beings had to apply semantic and syntactic rules and to store/access an infinity of facts in order to understand a language, they would have as much trouble as machines.. the native speaker, however, is not aware of having generated multiple semantic ambiguities which he then resolved by appeal to facts any more than he is aware of having picked out complex patterns by their traits or of having gone thru the calculations necessary to describe the way he brings his hand to a certain point in objective space.. perhaps language too, is a skill acquired by innately guided thrashing around and is used in a nonrulelike way.. wittgenstein suggests this point when he notes ‘in general we don’t use language according to strict rules – it hasn’t been taught us by means of strict rules either’.. such a view is not behavioristic
and maybe more humanistic.. perhaps let’s try idiosyncratic jargon ness.. sans rules, learning, training et al
3448 for the ai researcher it seems to justify the assumption that intelligent behavior can be produced by passively receiving data and then running thru the calculations necessary to describe the objective competence..
yeah intell behavior can be produced.. as it is already not us.. it is passive.. whale ness..
3463 thanks to this fundamental ability an embodied agent can dwell in the world in such a way as to avoid the infinite task of formalizing everything..
or anything..
3476 oettinger: ‘if indeed we have an ability to use a global context w/o recourse to formalization.. then our optimistic discrete enumerative approach is doomed’.. the situation: orderly behavior w/o recourse to rules..
rather.. being ness sans orderliness/behaviorness/rules/formalization.. carhart-harris entropy law et al
3479 we shall now try to show not only that human behavior can be regular w/o being governed by formalizable rules, but, further, that it has to be, because a total system of rules whose application to all possible eventualities is determined in advance makes no sense.. 3550 whatever it is that enables human beings to zero in on the relevant facts w/o definitively excluding others which might become relevant is so hard to describe that it has only recently become a clearly focused problem for philosophers.. it has to do w the way man is at home in his world, has it comfortably wrapped around him, so to speak.. human beings are somehow already situated in such a way that *what they need in order to cope w things is distributed around them where they need it, not packed away like a trunk full of objects or even carefully indexed in a filing cabinet.. this system of relations which makes it possible to discover objects when they are needed is our home or our world..
yeah.. but we have no ie’s of that.. because *this isn’t what being ness is about.. meaning.. we have no idea what our legit needs are.. ie: a nother way
3590 as wittgenstein says, ‘the aspects of things that are most important for us are hidden because of their simplicity and familiarity.. (one is unable to notice something because it is always before one’s eyes)
yeah.. i think our blindness has gone deeper than that..
the basic insight dominates the discussions that the situation is org’d from the start in terms of human needs and propensities which give the facts meaning, make the facts what they are, so that there is never a question of storing and sorting thru an enormous list of meaningless, isolated data..
again.. deeper.. ie: if we got back/to legit needs..
3604 since we create the field in terms of our interests, only possibly relevant facts can appear.. relevance is thus already built in
could be that way.. ie: imagine if we
3634 but in the physical world all predicates have the same priority. only the programmer’s sense of the situation determines the order in the decision tree
hints at art-ists and bot-ists ness
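(a sketch, mine not the book’s, of that point.. the same predicates, two orderings.. nothing in the data itself says which question to ask first.. all names invented):

```python
# sketch (not from the book): in the physical world all predicates
# have the same priority.. only the order the programmer picks
# determines what the decision tree notices first

predicates = {
    "wearing_lab_coat": lambda s: s.get("wearing") == "lab coat",
    "whispering":       lambda s: s.get("voice") == "whisper",
}

def decide(situation, order):
    """walk predicates in the given order; the order is the whole game."""
    for name in order:
        if predicates[name](situation):
            return name
    return "no match"

s = {"wearing": "lab coat", "voice": "whisper"}
print(decide(s, ["wearing_lab_coat", "whispering"]))  # wearing_lab_coat
print(decide(s, ["whispering", "wearing_lab_coat"]))  # whispering
```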
3663 but it is just because we know what it is to have to orient ourselves in a world in which we are not at home; or to follow rulelike operations like the heuristics for bidding in bridge; and how to model in our imagination events which have not yet taken place, that we know that we are not aware of doing this most of the time.. the claim that we are nonetheless carrying on such operations unconsciously is either an empirical claim, for which there is no evidence, or an a priori claim based on the very assumption we are calling into question.. when we are at home in the world, the meaningful objects embedded in their context of references among which we live are not a model of the world stored in our mind or brain; they are the world itself
yeah that.. but legit that
3676 the whole i/o model makes no sense here.. there is no reason to suppose that the human world can be analyzed into independent elements, and even if it could, one would not know whether to consider these elements the input or the output of the human mind.. if this idea is hard to accept, it is because this phenomenological account stands in opposition to our cartesian tradition which thinks of the physical world as impinging on our mind which then org’s it according to its previous experience and innate ideas or rules.. 3690 minsky has elaborated this computer cartesianism into an attempt at philosophy.. he begins by giving a plausible description of what is in fact the role of imagination: if a creature can answer a question about a hypothetical experiment w/o actually performing it, then it has demo’d some knowledge about the world.. for his answer to the question must be an encoded description of the behavior (inside the creature) of some submachine or ‘model’ responding to an encoded description of the world situation described by the question.. minsky then, w/o explanation or justification, generalizes this plausible description of the function of imagination to all perception and knowledge: questions about things in the world are answered by making statements about the behavior of corresponding structures in one’s model of the world.. he is thus led to intro a formalized copy of the external world; as if besides the objects which solicit our action, we need an encyclopedia in which we can look up where we are and what we are doing:.. if all knowledge requires a model we, of course, need a model of ourselves.. for this self description to be complete we will need a description of our model of our model of ourselves, and so forth.. minsky thinks of this self referential regress as the source of philosophical confusions concerning mind, body, free will, and so on.. he does not realize that his insistence on models has intro’d the regress and that this difficulty is proof of the philosophical incoherence of his assumption that nothing is ever known directly but only in terms of models.. 3705 there seems to be no place for the physical universe or for our world of interrelated objects, but only for a library describing the universe and human world which, according to the theory, cannot exist..
marvin minsky
3779 the rule model only seems inevitable if one abstracts himself from the human situation as philosophers have been trying to do for 2000 yrs, and as computer experts must, given the context free character of info processing in digital machines.. the situation as a function of human needs
yeah.. let’s legit go there.. ie: a nother way (has to be legit needs.. or just spinning our wheels)
3808 to understand this important diff which watanabe has noted but not explained, one must first abandon his way of posing the problem.. to speak of values already gives away the game.. for values are a product of the same philosophical tradition which has laid down the conceptual basis of ai.. for what watanabe misleadingly calls values belongs to the structure of the field of experience, not the objects in it.. it is only because our interests are not objects in our experience that they can play this fundamental role of organizing our experience into meaningful patterns or regions
huge.. but diff ‘interests’.. ie: maté basic needs
3833 heidegger is also the first to have called attention to the way philosophy has from its inception been dedicated to trying to turn the concerns in terms of which we live into objects which we could contemplate and control.. socrates was dedicated to trying to make his and other people’s commitments explicit so that they could be compared, evaluated, and justified..
socrates supposed to law..? martin heidegger
but it is a fundamental and strange characteristic of our lives that insofar as we turn our most personal concerns into objects, which we can study and choose, they no longer have a grip on us.. they no longer organize a field of significant possibilities in terms of which we act but become just one more possibility we can choose or reject.. nietzsche: ‘the great man is necessarily a skeptic.. freedom from any kind of conviction is part of the strength of his will’.. 3848 simon and reitman have seen that emotion and motivation play some role in intelligent behavior
need behavior to get intellected whales
3861 this is again a case of not being able to see what one would not know how to program
the little prince & see with your heart
on heidegger trying to account for organizing human experience in terms of basic human need to understand one’s being.. to understand this we require a more concrete phenomenological anal of human needs.. philosophical and psychological tradition has tried to ignore the role of these needs in intelligent behavior.. and the computer model has *reinforced this tendency
yeah.. we need to go (listen) way deeper to get to legit needs.. ie: a nother way *and reinforced the myth that being ness is about intellect ness
3871 on bodily needs (hunger, thirst) give sense of task at hand.. significant or not.. we must search to discover what allays our restlessness or discomfort..
yeah.. for that we need to get back/to non hierarchical listening
thus human beings do not begin w a genetic table of needs or values which they reveal to themselves as they go along.. nor, when they are authentic, do they arbitrarily adopt values which are imposed by their environ.. rather in discovering what they need they make more specific a general need which was there all along but was not determinate
loaded.. but on right track of needing to org around legit basic needs.. ie: as infra
3886 kierkegaard speaks of a change of sphere of existence as a leap.. on a conceptual level.. called a conceptual revolution..
yes that.. humanity needs a leap.. to get back/to simultaneous spontaneity .. simultaneous fittingness.. everyone in sync..
3900 the conceptual framework determines what counts as a fact.. thus during a revolution there are no facts to which scientists can appeal to decide which view is correct.. after a revolution scientists work in a diff world.. 3915 in suggesting an alt view, or more exactly, in analyzing the way science actually proceeds so as to provide the elements of an alt view
so.. not legit alt
kuhn focuses on the importance of a paradigm.. that is a specific accepted ie of sci practice, in guiding research..
oi.. guiding research ness.. keeping us in same song rather than legit alt mode
indeed.. the existence of a paradigm need not even imply that any full set of rules exists.. w/o such paradigms scientists confront the world w the same bewilderment which we have suggested would necessarily confront an ai researcher trying to formalize the human form of life: in the absence of a paradigm.. all of the facts that could possibly pertain to the development of a given science are likely to seem equally relevant.. w/o paradigm.. not even clear what would count as a fact.. since facts are produced in terms of a particular paradigm for interpreting experience.. 3930 thus.. finding a new paradigm is like a kierkegaardian leap: just because it is a transition between incommensurables, the transition between competing paradigms cannot be made a step at a time, forced by logic and neutral experience.. like the gestalt switch, it must occur all at once (though not necessarily in an instant) or not at all
not part\ial.. for (blank)’s sake
3945

man’s nature is so malleable.. if computer paradigm becomes so strong.. and since machines cannot be like human beings, human beings may become progressively like machines..

yeah that.. huge.. whales

our risk is not the advent of super intelligent computers, but of subintelligent human beings
rather.. of whales.. intellect ness is part of that whales ness..
conclusion this alt conception of man and his ability to behave intelligently is really an anal of the way man’s skillful bodily activity as he works to satisfy his needs generates the human world
generates sea world.. because in our current state (of not hearing ness et al).. we have no idea what our legit needs are
3974 yet it is just because these *needs are never completely determined for the individual and for mankind as a whole that they are capable of being made more determinate and human nature can be retroactively changed by individual and cultural revolutions..
*until now
4074 this theory suggests that the ultimate situation in which human beings find themselves depends on their purposes, which are in turn a function of their body and their needs, and that these needs are not fixed once and for all but are interpreted and made determinate by *acculturation and thus by changes in human self interp.. thus in the last anal we can understand why there are no facts w built in significance and no fixed human forms of life which one could ever hope to program
whalespeak.. we can org/program around 2 specific human needs.. but they have to be something 8b people already crave.. that deep.. the *interpreting ness is what’s killing us.. because it’s manufacturing consent.. not legit listening.. so not legit us/needs
this is not to say that children do not begin w certain fixed responses.. but rather that these responses are outgrown or *overridden in the process of maturation.. thus no fixed responses remain in an adult human being which are not under the control of the significance of the situation
yeah.. that’s the virus/cancer.. how sea world *overrides our not yet scrambled.. making us all whales
4087 this human world w its recognizable objects is org’d by human beings using their embodied capacities to satisfy their embodied needs.. there is no reason to suppose that a world org’d in terms of these fundamental human capacities should be accessible by any other means
so why do we keep spinning our wheels on non legit needs et al.. let go
4133 a sense of the global situation is necessary to avoid storing an infinity of facts
yeah that.. fuller too much law et al
4188 when minsky and papert talk of finding ‘global features’ they seem to mean finding certain isolable, and determinate, features of a pattern (ie, certain angles of intersection of two lines) which allow the program to make reliable guesses about the whole.. this just introduces further heuristics and is not wholistic in any interesting sense
yeah that..
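(a sketch.. mine, not minsky/papert’s.. of how an isolable ‘global feature’ stays local: one determinate feature – a corner count – and a guess about the whole.. everything here invented):

```python
# sketch (assumptions, not the book's) of why a so-called 'global
# feature' is really just another local heuristic: guess the whole
# shape from one isolable, determinate feature -- a vertex count

def corner_count(points):
    """points: vertices of a polygon; the 'feature' is just len()."""
    return len(points)

def guess_shape(points):
    # a reliable-looking guess from one isolable feature..
    return {3: "triangle", 4: "quadrilateral"}.get(corner_count(points), "?")

print(guess_shape([(0, 0), (1, 0), (0, 1)]))  # 'triangle'
# nothing here sees the pattern as a whole -- it counts, and guesses
```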
4223 thus even w these breakthroughs the computer could not exhibit the flexibility of a human being solving an open structured problem.. (area 4) but these techniques could help w complex formal problems such as strategy in games and long range planning in organizing means-ends anal
we don’t need that.. we just need tech to take in random data (idiosyncratic jargon) and use it to match local people.. everyday
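(a bare sketch of that ‘just matching’ idea.. mine, everything below invented.. take in whatever words people offer today, in their own jargon, and connect the people whose words overlap):

```python
# a bare sketch (assumptions, not a spec) of matching local people
# by whatever idiosyncratic jargon they offer today.. no stored
# model of the person, just today's words

from collections import defaultdict

def match_local_people(daily_curiosities):
    """daily_curiosities: {person: set of words, in their own jargon}"""
    by_word = defaultdict(set)
    for person, words in daily_curiosities.items():
        for word in words:
            by_word[word].add(person)
    # keep only the words two or more people offered today
    return {word: people for word, people in by_word.items() if len(people) > 1}

today = {"ana": {"whales", "listening"}, "bo": {"listening", "maps"}}
print(match_local_people(today))  # {'listening': {'ana', 'bo'}}
```

nothing accumulates.. tomorrow’s words start the matching over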
4236 we have seen that the present attempt to store all the facts about the environ in an internal model of the world runs up against the problem of how to store and access this very large, perhaps infinite amount of data.. this is sometimes called the large data base problem.. minsky’s book presents several ad hoc ways of trying to get around this problem, but so far none has proved to be generalizable..
let go.. not what we need
4251 it also would *require some way to distinguish essential from inessential facts. most fundamentally, it is of course limited by having to treat the real world, whether stored in the robot memory or read off a tv screen, as a set of facts; whereas human beings org the world **in terms of their interests so that facts need be made explicit only insofar as they are relevant
*tech w/o judgment wouldn’t need any of that **sounds like art-ists and bot-ists ness .. but we don’t really know our legit interests/needs..
4266 since digital machines have symbol manipulating powers superior to those of humans, they should, so far as possible, take over the digital aspects of human ‘info processing’
but this isn’t the symbol manip we need.. we just need help in hearing the itch-in-8b-souls everyday.. and in using that data to connect us.. we don’t need help problem solving.. even if we did.. if we got that first part.. we’d have no/less/diff problems.. but problem solving wouldn’t be (isn’t) the essence of human being.. neither is info processing.. et al
leibniz already claimed that a computer ‘could enhance the capabilities of the mind to a far greater extent than optical instruments strengthen the eyes’.. but microscopes and telescopes are useless w/o the selecting and interpreting eye itself.. thus a chess player who could call on a machine to count out alts once he had zeroed in on an interesting area would be a formidable opponent.. likewise, in problem solving, once the problem is structured and the attack planned, a machine could take over to work out the details (as in the case of machine shop allocation or investment banking)
oi.. not what we need.. wasting us.. wasting tech
4279 indeed, the first successful use of computers to augment rather than replace human intelligence has recently been reported
cool to replace artificial w augment.. but also need to replace intellect w interconnectedness
because it makes the mathematician an essential factor in the quest to establish theorems.. this real time interplay between man and machine has been found to be an exciting and rewarding mode of operation..
not deep enough.. not an interplay.. computer is just augmenting.. as seen in how we keep on focusing on non legit needs/interests et al ie: theorems, investment banking, chess, et al
4293 instead of trying to make use of the special capacities of computers, however, workers in ai .. blinded by their early success and hypnotized by the assumption that thinking is a continuum.. will settle for nothing short of unaided intelligence
as we’re all still doing.. if we think we need any form of m\a\p
4307 depends on specific forms of human ‘info processing’ which are in turn based on the *human way of being in the world.. and this way of being in a situation turns out to be unprogrammable in principle using presently conceivable techniques
yeah that.. but we have no idea what *this is
to avoid the fate of the alchemists, it is time we asked where we stand.. now, before we invest more time/money on the info processing level, we should ask whether the protocols of human subjects and the programs so far produced suggest that computer language is appropriate for analyzing human behavior
rather.. is analyzing human behavior appropriate.. am thinking it’s cancerous.. left off here 4307 from copy of book in internet archive.. [https://archive.org/stream/whatcomputerscan017504mbp/whatcomputerscan017504mbp_djvu.txt].. book has 3 parts and a conclusion
Part I. Ten Years of Research in Artificial Intelligence (1957-1967)
Part II. Assumptions Underlying Persistent Optimism
Part III. Alternatives to the Traditional Assumptions
CONCLUSION: The Scope and Limits of Artificial Reason
_________ __________ from Roger:
@rogerschank Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…

People learn from conversation and Google can’t have one.

what computers can’t do .. ai.. algo .. ness.. ________ ___________ tech as it could be ____________ ____________ __________