monica anderson


find/follow Monica’s work:

monica anderson's site

from her about page above:

I had, for years, been aware of a few key minority ideas that had been largely ignored by the AI mainstream and started looking for synergies among them. In order not to get sidetracked by the majority views I temporarily stopped reading books and reports about AI. I settled into a cycle of days to weeks of thought and speculation alternating with multi-day sessions of experimental programming.

In late 2004 I accepted a position at Google, where I worked for two years in order to fill my coffers to enable further research. I learned a lot about how AI, if it were available, could improve Web search. Work on my own algorithms was suspended for the duration but I started reading books again and wrote a few whitepapers for internal distribution at Google. I discovered that several others had had similar ideas, individually, but nobody else seemed to have had all these ideas at once; nobody seemed to have noticed how well they fit together.

I am currently funding this project myself and have been doing that since 2001. At most, Syntience employed three paid researchers including myself plus several volunteers, but we had to cut down on salaries as our resources dwindled. Increased funding would allow me to again hire these and other researchers and would accelerate progress.

link twitter

AI Epistemologist and implementer. Founder and Principal Researcher at Syntience Inc. Co-founder Sens.AI. Ex-Googler.

los altos ca

link facebook


Monica is one of the people written up in our business plan – in the possible evidences. Her goals/vision are very resonant. Bernd intro’d us to her.


Monica founded:

syntience site


sept 2014:

For this conference, Monica says, “Most of what I have to say is available in the top two PDFs listed at …, especially the ‘Unfriendly AI Problem Solved’ paper. My messages are ‘We’ve been trying to solve the wrong problem for 60 years’ and ‘All intelligences are fallible, so we can always pull the plug on them when they make their first mistake.’”



feb 2017 – interview


Existing Deep Learning seems to have some problems getting deep enough.

My own position is “Deep Learning isn’t quite sufficient and we need to find the next level technology beyond that” and so I’m working on exactly that. I have called this work “Artificial Intuition”, “West Pole Style Deep Learning”, and “Dynamic Deep Learning”. None of them are good; I need a better name, and I think I’ll switch to “Organic Learning”.

in my kind of Deep Learning, the network starts out *empty* and grows *organically* to hold the understanding it is accumulating by reading the corpus. Basically, discovery of new knowledge means creating new “neurons” and “synapses” (well, their software counterparts in a simulation).
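the grow-as-you-read idea can be made concrete in a toy sketch. this is my own illustration, not Syntience’s actual algorithm: unseen tokens become new “neurons”, and adjacent tokens get linked by weighted “synapses”.

```python
from collections import defaultdict

class OrganicNet:
    """Toy 'organic learning' sketch: the network starts empty and
    grows as it reads, instead of having a fixed architecture."""

    def __init__(self):
        self.neurons = set()              # one node per discovered token
        self.synapses = defaultdict(int)  # (a, b) -> co-occurrence weight

    def read(self, text):
        tokens = text.lower().split()
        for tok in tokens:
            self.neurons.add(tok)         # discovery => create a new neuron
        for a, b in zip(tokens, tokens[1:]):
            self.synapses[(a, b)] += 1    # adjacency => strengthen a synapse

net = OrganicNet()
net.read("the cat sat on the mat")
net.read("the cat ate")
print(len(net.neurons))                # 6 distinct tokens so far
print(net.synapses[("the", "cat")])    # 2 -- link reinforced by repetition
```

nothing is pre-allocated here: every neuron and synapse exists only because the corpus put it there, which is the contrast with a fixed-size trained network.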

sounds like hosting-life-bits via self-talk as data..  *idiosyncratic jargon as driver

We know we can’t teach calculus to first graders since the children haven’t learned enough basics of math.

?.. maybe can’t ‘teach’ calc.. but i’m thinking first grader could grok calc..

To people starting out in AI today I’d like to say “Science is overrated; learn to think Holistically, and study Epistemology.” Science has been the main paradigm for extending our knowledge. But the emergence of Holistic tools like Neural Networks and general AI means that the idea that Science is the only game in town will come to an end in the next decade. It had a good run after 1650 and in 1850–2012. Yes, Science still makes sense in most domains. But there are certain problems in certain domains where the Scientific approach never made any sense, and we are starting to see why, and change is on the way.

part 1


The main capability we can observe in the new systems is that they can “Perform Autonomous Reduction”. This is my term for the ability to look at low level data, such as sensory input from a camera, a microphone, or characters in a book, and discover the high level meaning behind the pixels, sounds, and characters.

this ability to perform Reduction is exactly what was missing in the “old” kind of AI

video – dual process theory

understanding: fast; parallel; intuitive; subconscious; involuntary

reasoning: slow; step by step; logical; conscious; voluntary

ai research has been working on the wrong problem.. that’s why we’ve missed it (reasoning only thin layer on top)

reductionism – the use of models… reasoning requires models of this kind… unfortunately .. comprehensive world models are impossible.. the world changes behind your back

ai research has been done in limited domains and they only seem to work because we have reduced the problem before we start to something we can handle

incorrectly assuming that intelligence is based on reasoning..

9 min – take most difficult things that humans do and call that intelligence.. like playing chess.. solving an algo.. computers do all of these things already and we don’t consider that ai..

i’d like to say better defn for agi: the ability of a computer or other machine to perform those human activities that are normally thought not to require human intelligence..

10 min – doing what we do w/o thinking… has to be first step toward agi.. walking.. speaking.. language.. enjoy symphony

11 min –  model world: reductionist ai… model brain: neuroscience inspired ai… i propose 3rd…  model understanding: epistemological ai

epistemology: the theory of knowledge, especially with regard to its methods, validity, and scope. Epistemology is the investigation of what distinguishes justified belief from opinion.

start from fundamental principle of epistemology.. what is learning what is knowledge.. how can we learn… reasoning actually

reduction is taking rich world and reducing to model… i’m talking about creating understanding machines

12 min – understanding requires model free methods… these provide learning, salience, reduction, abstraction, novelty, emergent robustness…  things you cannot make use of in reductionist systems using logic..

model based: requires understanding; discards context; provably correct (why you should always go model based/reductionist if you can.. just can’t with things like agi); requires correct input data; brittle

model free: requires no understanding; exploits context; often fallible; operates even on scant evidence; robust

agi researchers .. need to create computer programs that jump to conclusions on scant evidence
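a toy contrast (my own example, not from the talk) of the model-based vs model-free lists above, using sentiment labeling – including the “jump to conclusions on scant evidence” behavior:

```python
# Model-based: explicit if-then rules built from the programmer's
# understanding. Does exactly what the rules say, but brittle -- any
# input outside the anticipated model falls through.
def model_based(text):
    if "great" in text:
        return "positive"
    if "awful" in text:
        return "negative"
    raise ValueError("input outside the model")  # the brittle failure mode

# Model-free: no rules, just remembered examples. It compares by word
# overlap and jumps to a conclusion on scant evidence -- often fallible,
# but it always answers and degrades gracefully.
examples = [("what a great movie", "positive"),
            ("truly awful acting", "negative"),
            ("i loved every minute", "positive")]

def model_free(text):
    words = set(text.split())
    best = max(examples, key=lambda ex: len(words & set(ex[0].split())))
    return best[1]

print(model_based("a great film"))            # "positive" -- rule fired
print(model_free("loved every minute of it")) # "positive" -- a guess from overlap
```

the model-free guess here rests on three overlapping words – scant evidence – yet it is robust where the rule-based version simply raises an error.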

understanding: intuition; everyday life; trivial problems in complex context

reasoning: logic; science; complicated problems in trivial contexts

19 min – all ai successes we’ve had over the years.. have been basically false promises.. unable to get into the real ai

science solves difficult problems in trivial (model) contexts using logical reasoning

humans solve trivial problems in complex (reality) contexts using intuitive understanding..

if we want to progress in ai we have to use artificial intuition

in my ignorance.. i don’t think we can get to artificial intuition.. so i like Maurice’s verbiage.. augmented intuition.. but i think he’s really in part claiming artificial intuition.. i don’t know..


why ai works – june 2017

In his book “Thinking Fast and Slow”, Daniel Kahneman discusses the idea that human minds use two different and complementary processes:


understanding: fast, parallel, intuitive, subconscious, expensive, model free

reasoning: slow, step by step, logical, conscious, efficient, model based

We have known for a long time that brains use these two modes. But the AI research community has been spending overmuch effort on the Reasoning part and has been ignoring the Understanding part for sixty years.

We had several good reasons for this. Until quite recently, our machines were too small to run any useful sized neural network. And also, we didn’t have a clue about how to implement this Understanding. But that is exactly what changed in 2012 when a group of AI researchers from Toronto effectively demonstrated that Deep Neural Networks could provide a simple kind of shallow and hollow proto-Understanding (well, they didn’t call it that, but I do).

the programmer cannot make the system Understand; they can only put in a hollow and fragile kind of Reasoning, as a program with many if-then cases.

And any misunderstandings the programmer has about the problem domain will become “bugs” in the computer program.


But today, for certain classes of moderately complex problems, we can use a DNN to automatically learn for itself how to Understand the problem.


understand or calculate..?

Which means we no longer need a programmer to Understand the problem.

We have delegated our Understanding to a machine.

And if you think about that for a minute you will see that that’s exactly what an AI should be doing. It should Understand all kinds of things so that we humans won’t have to.


there are two common situations where this will be a really good idea. One is when we have a problem we cannot Understand ourselves. We know a lot of those, starting with cellular biology.

The other common case will be when we Understand the problem well, but making a machine Understand it well enough to get the job done is cheaper and easier than any alternative.


The text on the right says “Woman in white dress standing with tennis racket two people in green behind her”. Which is not a bad description of the image. It could be used as the basis for a test for English skill level for adult education placement.

For all practical purposes, this is Understanding.


seems like rep/sub/calculate ness to me..

An image is, to a computer, a single long sequence of numbers denoting values for red, blue, and green colors in values from 0 to 255; it also knows how wide the image is. How does it get from this very low level representation to knowing that there is a woman with a tennis racket in the image?
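the flat representation she describes can be shown in a few lines (a minimal sketch – real images are vastly larger and usually wrapped in formats like PNG):

```python
# What the computer actually "sees": one flat run of 0-255 numbers
# (red, green, blue triples), plus the known image width.
width, height = 2, 2

# 2x2 image, row by row: red, green / blue, white
pixels = [255, 0, 0,   0, 255, 0,
          0, 0, 255,   255, 255, 255]

def pixel(x, y):
    """Recover pixel (x, y) -- pure arithmetic on the flat list."""
    i = (y * width + x) * 3
    return tuple(pixels[i:i + 3])

print(pixel(1, 0))   # (0, 255, 0) -- the green pixel
```

the gap between this row of integers and “woman with a tennis racket” is exactly the Reduction she is talking about.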

i don’t think that is understanding

embodiment et al


Reasoning proceeds by breaking problems into subproblems and solving those, which is a “flowing downhill” kind of strategy. In mathematics we accept (and many mathematicians only accept this reluctantly) that we need to use induction to move “uphill” in abstractions. And that’s a very limited uphill movement at that. Epistemology allows for much stronger uphill moves. This is known as “jumping to conclusions on scant evidence” and it’s allowed in Epistemology based pre-scientific systems.

i don’t think a machine can do this – like a human can

but i do think a machine can facil our understandings.. by faciling our curiosities.. ie: hosting-life-bitsness (output ness) using idio-jargon/self-talk as data

a nother way

As an aside, here’s a pretty deep related thought: Nature/Evolution re-uses anything that works. I like to think that Understanding is a spandrel of Evolution itself. Neural Darwinism certainly straddles this gap. Could be coincidence, or the only answer that will work at all. More later.

i see this.. but not as understanding – like a human


We can now use these Deep Neural Networks as components in our systems to provide *Understanding of certain things like vision, speech, and other problems that require that we discover high level concepts in low level data. The technical (Epistemology level) name for this uphill flowing process is “Reduction” and we’ll be using that term later after we explain what it means.

*i would replace this with – algorithms/rules/equations.. but not understanding

but then again.. do we ever understand.. maybe the defn is weak on my end.. thinking shaw communication law


reductionism and holistic

everyone is solving most of their problems Holistically.




Maurice Conti