intro’d to his work in 2009/2010?.. then started following his brother.. Conrad.. and wolfram alpha
Computing a theory of everything
Stephen Wolfram, creator of Mathematica, talks about his quest to make all knowledge computational — able to be searched, processed and manipulated. His new search engine, Wolfram Alpha, has no lesser goal than to model and explain the physics underlying the universe.
fundamental irreducibility – you have to just watch it evolve…
had to create new science for this.. we’re used to predicting everything..
knowledge based computing..
with wolfram alpha inside Mathematica – you can make precise programs that call on real world data, or give vague input and have alpha figure it out – democratizes coding..
computation is destined to be the defining idea of our future
fractals – how you get to true complexity
this post on the coming soon wolfram cloud..
reducing time between intention and action… making access equitable… and equity accessible..
Making the world computable is a much higher bar than being able to generate Wikipedia-style information … a very different thing. What we’ve tried to do is insanely more ambitious.
It changes the economics of building applications, because what used to take hours or days or weeks to do, can now take minutes. Currently, Wolfram meets many people who have an interesting idea or algorithm or application, but can’t complete it for lack of time or a team of developers or money. That could all change.
“It will spawn a whole mass of new startups,” Wolfram told me. “Now it becomes realistic for someone to build out a complete algorithm and automation system in a few hours.”
It also changes who can program, because instead of programs being tens of thousands of lines of code, they’re 20 or 200. And that means kids can code or novice programmers can get started — and build significant apps.
It’s not quite to artificial intelligence, but it might be coming. Maybe in a massively distributed form.
“Today, there are probably 10-50 billion computers in the world, depending on how you define them, and lots of devices have computers in them,” he told me. “In the near future, almost everything will be made of computers — even small objects. At that point, computation becomes even more important than it is today, and things are adaptable and modifiable at all levels.”
so perhaps personal fabrication ness – as the day. dancing .. in the city. a nother way.
july 2015 interview on ai and the future:
reproducing what human brains choose to do may not be the right problem
Almost every X that’s been defined so far, machines have ended up being able to do, though the methods that they use to do it are usually utterly different from the ones that seem to be involved with humans.
on intelligence.. and human intelligence.. and definitions.. and words..
“How would we recognize abstract life that doesn’t happen to share the same history as all the particular kinds of life on Earth?” That’s a hard question.
and science of people ness.. ie: the manufacturedness of life on earth
That you can have a brain that we identify, okay, that’s an example of intelligence. You have a system that we don’t think of as being intelligent as such; it just does complicated computation. One of the questions is, “Is there a way to distinguish just doing complicated computation from being genuinely intelligent?”
there isn’t a distinction between the intelligent and the merely computational, so to speak. In fact, that observation is what got me launched on doing practical things like building Wolfram|Alpha, because I had thought for decades, “Wouldn’t it be great to have some general system that would take knowledge, make it computational, make it so that if there was a question that could in principle be answered on the basis of knowledge that our civilization has accumulated, we could, in practice, do it automatically.”
But I kind of thought the only way to get to that end result would be to build a sort of brain-like thing and have it work kind of the same—I didn’t know how—as human brains work. And what I realized from the science that I did was that that just doesn’t make sense. That’s sort of a fool’s errand to try to do, because actually, it’s all just computation in the end, and you don’t have to go through this sort of intermediate route of building a human-like, brain-like thing in order to achieve computational knowledge, so to speak.
Recently, computers, and GPUs, and all that kind of thing became fast enough that, really—there are a bunch of engineering tricks that have been invented, and they’re very clever, and very nice, and very impressive, but fundamentally, the approach is 50 years old, of being able to just take one of these neural network–like systems, and just show it a whole bunch of examples and have it gradually learn distinctions between examples, and get to the point where it can, for example, recognize different kinds of objects and images
the fact is that the actual things we use in practice aren’t particularly neural-like. They’re basically just compositions of functions. You can think of them as just compositions of functions that have certain properties, and the one thing that they do have is an ability to incrementally adjust, that allows one to do some kind of incremental learning process. The fact that they get called neural networks is because it historically was inspired by how brains work, but there’s nothing really neurological about it. It’s just some kind of, essentially, composition of simple programs that just happens to have certain features that allow it to be taught by example, so to speak.
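a minimal sketch (my own, not Wolfram’s code) of that point: a “network” here is nothing but two composed scaling functions, with parameters nudged incrementally toward examples.. the example task (learning y = 2x) and all names are made up for illustration:

```python
# a "neural network" as just a composition of simple adjustable
# functions, taught by example via small incremental corrections
# (real networks insert nonlinearities between the stages)

def predict(x, w1, w2):
    h = w1 * x       # first simple function
    return w2 * h    # second simple function, composed on top

def train(examples, steps=2000, lr=0.01):
    w1, w2 = 0.5, 0.5
    for _ in range(steps):
        for x, target in examples:
            err = predict(x, w1, w2) - target
            # incremental adjustment: move each parameter slightly
            # in the direction that shrinks the error
            g1 = err * w2 * x
            g2 = err * w1 * x
            w1 -= lr * g1
            w2 -= lr * g2
    return w1, w2

# teach it y = 2x from three examples
w1, w2 = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(predict(4.0, w1, w2))  # lands very close to 8.0
```

nothing neurological anywhere in it – just function composition plus a property that allows teaching by example..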
We can go through all kinds of different things about creativity, about language, about this and that and the other, and I think we can put a checkmark against essentially all of them at this point as, yes, that component is automatable.
I’ve been slowly trying to understand the consequences of that. It’s a little bit beyond what people usually think of as just AI, because AI is about replicating what individual human brains do rather than this thing that is more like replicating, in some more automated way, the knowledge of our civilization. So in a sense, AI is about reproducing level one, which is what individual brains can learn and do, rather than reproducing and automating level two, which is what the whole civilization knows about.
Danko Nikolic – 3 levels – with 4th needed to go to 3.
How can that be consistent with free will? Well, I think the point is that it’s a consequence of something I call “computational irreducibility.” It’s a consequence of the fact that even though you may know the rules by which something operates, when the thing actually runs and applies those rules many times, the result can be a sophisticated computation—in fact, a computation sufficiently sophisticated that you can’t predict its outcome much faster than just having the computation run itself and seeing what happens.
In other words, the way that any way of formalizing things works—in a sense the way that math works, but it isn’t really what people traditionally think of as math—the way that everything has to work, it’s the case that there are many systems where the computations they do are equivalently sophisticated, which means that you can’t jump ahead and see, “Oh! That thing isn’t free of its deterministic rules. I can tell what it’s going to do.” It seems to be robotic, in some sense.
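rule 30 is the concrete example Wolfram usually gives for computational irreducibility: a rule trivial to state, where the only known way to learn what the pattern does far downstream is to run every step.. a minimal sketch (mine, not his code):

```python
# elementary cellular automaton rule 30: each cell's new value is read
# off from the bits of the number 30, indexed by its 3-cell neighborhood
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1            # start from a single black cell
rows, center = [], []
for _ in range(15):
    rows.append("".join("#" if c else "." for c in cells))
    center.append(cells[15])
    cells = step(cells)
print("\n".join(rows))   # the familiar chaotic triangle emerges
```

knowing the rule completely doesn’t let you jump ahead to, say, the millionth value of the center column.. you just have to watch it evolve..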
I think the notion that you can expect to understand how the engineering works … that’s perhaps one of the things that people find disorienting about the current round of AI development is that “you can expect to understand how it works” is definitely coming to an end.
our engineering has mostly in the past been based on, “Let’s incrementally build things up, so that at every step we understand what’s going on.” And there’s nothing that is emergent about it. It’s all just built step by step, with understanding at every step. I think, in the future—and we can see this already in a lot of systems I’ve been building, too—that the humans are the ones who define the goal. The role of the technology is to automate getting to that goal, and how that goal is got to is really just the technology’s business. It’s not something where—yes, it can be academically intellectually interesting to understand, “Oh, how did the technology work to get me here?” But the objective is just to get there as automatically and efficiently as possible, without the constraint of necessarily having to understand how the thing worked inside, or having to be able to build it up incrementally, with understanding of each step.
in my own life, I’ve sort of alternated between doing basic science and doing technology development. And what’s typically happened is, the basic science tells me something about what kind of technology is worth developing, or can be developed, and then the technology development lets me actually be able to do more on the basic science side. And like right now, with all the things we’ve developed about computational knowledge, I think we’re again on the cusp of a real understanding of how symbolic language relates to traditional long-thought-about neural-net, more traditionally brain-like, AI activities—and now, what will the true end point of all of this be? I don’t really know. I’ve thought quite a bit about that. I’m not thrilled with some of the things that I’m understanding, in terms of where this goes and where it ends, because in a sense, my personal emotional goal structure is not necessarily aligned with what I can see scientifically and technologically as the path that we’re on.
“Okay, we succeeded. This is an AI.” Well, it may be able to do processing of information in ways that are completely as good, or vastly better than human brains and all that kind of thing. But the question is, what will that box do? And in the abstract, there’s no kind of goal defined for the box, because the only way we know to define goals—it doesn’t really make any sense to say the box has a goal, because goals are things that are really very much connected with a cultural history. Goals for us are a very human-defined thing, where individuals have goals, those goals depend on their history, the history of our civilization, things like that.
I think the way in which we are special, and it’s almost tautological, is that we are the only things that have the particular history that we have, so to speak. For example, things like the goals that we have, which are in no way, in my view, abstractly defined. Even within human society, people will often say, “Well, of course, everybody should want to learn more and be a better person,” or that kind of thing, but we know perfectly well that there’s no such uniformity. If you go to talk to somebody else, and that somebody will say, “Well, of course, everybody wants to be rich,” but you can talk to plenty of people who say, “I just don’t care. It doesn’t make any difference.” Goals are not, I think, an absolute thing. They’re arbitrarily defined, and defined by history, and individuals, and things like this. And I think when it comes to making an artificial intelligence, there is no intrinsic sense. You can give it whatever goals you want. As a human, you can say, “Okay, I want this artificial intelligence to go and trade stocks for me and make the maximum possible amount of money.” Or, “I want this thing to go and be a great companion to me and help me to be a better me,” whatever else you want to do. But, for it intrinsically, it does not have—merely as a consequence of being intelligent, that doesn’t give it some kind of goal. And I think the issue is, as you look to the future, and you say, “Well, what will the future humans …?” where there’s been much more automation that’s been achieved than in today’s world—and we’ve already got plenty of automation, but vastly more will be achieved. And many professions which right now require endless human effort, those will be basically completely automated, and at some point, whatever humans choose to do, the machines will successfully do for them. And then the question is, so then what happens? What do people intrinsically want to do? What will be the evolution of human purposes, human goals?
The wealthy are freed from having to do anything, and thus they can do anything they want or nothing at all.
but the wealthy aren’t freed – that’s why they’d play games or whatever… none of us are free if one of us is chained.. ness.. no?
i do agree with the need for – all of us – having the luxury to do whatever… ie: where our ongoing self-iterating energy comes from. how the one world works, ie: how the gaming and the dancing and the art\ing fits together. so let’s do that first.
One of the funny possibilities is that, from the future, people will look to say, “Well, when humans were really humans, what were their goals? Those were the right goals.
I could imagine, in one scenario of the future, some number of years hence when a lot of the constraints that we have today have been removed, it’s like, “Let’s go back and look at those guys who were living at a time when the constraints hadn’t been removed, but where we have enough information to tell why they did what they did, and let’s microscopically reconstruct for those seven billion people what the choices that they made were, and let’s codify those into what we think is the right way to behave as a genuine non-artificial human, so to speak.” Perhaps this won’t be what actually happens, but I think this is one of the possible [outcomes].
As I say, I think the disembodied intelligence, the raw intelligence, of “Okay, I’ve got this computer, and it’s intelligent, and isn’t that nice,” I think that without some addition of goals and a more detailed history, I don’t think that ends up being [much]. It’s almost a null kind of thing to have produced. It’s almost generic. It’s almost like saying, “Well, I’ve made a piece of the universe again.” It’s too unspecific to be useful, so to speak
I guess I personally just live life the way I feel, so to speak, and separately understand that underneath, it’s sort of all just bits that are operating quite deterministically, and I can study those, and make use of those, and build technology based on those, and so on.
interviewed by Nikola jan 2011:
14 min – i think it not the case.. that the deeper we drill the more complicated.. ie: a simple program that exists in the universe that works for us.. i don’t know.. we don’t have a basis for knowing an answer to.. what we do know.. our universe is not as complicated as it could be.. we see a lot more order in the universe.. then the question is… so then… are the rules for the universe.. 4 lines long.. 1000 lines long..
16 min – what i’ve been interested in .. what are those rules like.. my hobby to explore what’s possible.. the main thing that has come out from that is a big effort.. you run into computational irreducibility.. seems impossible to jump ahead.. to not go thru all the steps
19 min – anyone who writes a book – new kind of science.. probably calls himself a new kind of scientist.. a lot of what i do.. science/tech/strategy.. my internal mindset: i like creating things and figuring things out .. i found the best vehicle for that is a fairly large r&d company
21 min – i find it invigorating to deliver these things to the world.. you feel good.. and you learn more
22 min – wolfram alpha: take world’s knowledge and make it computable.. can you take all stuff and put it in form that is ie: user friendly.. so humans get to say what they want to do and computer fills in rest… where if there was a human expert somewhere that can answer that….. our system can automatically give you expert level answer
24 min – what sort of info is absorbable/relevant to a human
25 min – i’m most interested in building large scale long term things.. alien artifacts.. things once built seem interesting .. but no one would think to build
27 min – many programs come from mining universe.. noticing what goes on in nature..
29 min – i think it funny – when we think about intelligence.. can we really define it.. ie: wolfram alpha.. is it ai or just doing computation….. when we think about creating artificial life.. what is defn of life that we can make it artificial
30 min – the reason we can talk about living, non-living is the historical defn.. there isn’t really an abstract defn of life.. and i think it’s true of intelligence…
31 min – on computational equivalence… implication.. that systems are equivalent in level of computation.. but it seems not so….
33 min – when we talk about intelligence.. seems we retreat to historical defn.. what ends up happening.. the thing that is ai.. gets defined via human ness… even though goes beyond what humans can do.. ie: wolfram is not very human life.. we can figure out in mili seconds… with algorithms bizarrely non human.. then we have to go back.. reverse engineer for human purposes… how could a human do this…
35 min – wolfram… we’re just trying to get a result.. we’re not trying to figure out how human would do it.. so .. if you ask.. are we creating something that is abstractly intelligence.. i don’t think there is something.. when we look at our own intelligence.. wound up with history.. the notion of a purely abstract intelligence.. rather defecated (?). …
38 min – future more diff than past than we’ve imagined.. we’ll be able to optimize everything.. what we’ve explored so far is tiny.. very simple/regular behaviors… in traditional engineering/technology…. nature goes beyond what we’ve done… even nature is only exploring tiny part… but we could go beyond all that..
40 min – in future … when you optimize for functionality… rather than creativity.. what we as humans do will start to be outsourced… operations less human understandable… matching human purpose.. so used.. impossibilities will become possibilities…
41 min – big challenge for future.. not what’s possible.. but what do you choose to do.. we can do anything.. what do we choose….. important problem for thinking about future.. understanding human purpose..
historical.. the things we think are worth doing now.. kind of arose from history… one of questions is… how do we figure out what is worth doing…
a story about people grokking what matters
43 min – bad view of history.. misconceptions: elaborate future thoughts… but when we look at it… what’s special about humanity vs rock (which has many complex happenings) … distinctions will be something about history.. not a question of the general computations.. but the histories..
45 min – in wolfram alpha.. trying to encapsulate what we’ve achieved in history as a civilization.. systematic knowledge..
47 min – all relates to purpose.. hard to be optimistic/pessimistic if don’t know what purpose is
49 min – a lot of what will happen – people will say – anything is possible.. so people will start asking.. so what shall we do..
50 min – interesting today.. for first time in history everything is precisely recorded…
52 min – if question is .. what can we expect in future.. and what can we do today as we think about that future.. we have a new framework for thinking about these things.. ahistorical.. previously.. even math has taken historical format.. math has been practiced progressively incrementally generalized… historically driven field.. the thing interesting about computational universe now.. we get to explore it ahistorically.. i don’t know the implications… the most basic basic science.. pre science..
science of people ness – a different vision/experiment…
54 min – best hope to understand what’s possible..
What Babbage imagined is that there could be a machine—a Difference Engine—that could be set up to compute any polynomial up to a certain degree using the method of differences, and then automatically step through values and print the results, taking humans and their propensity for errors entirely out of the loop.
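the method of differences itself is simple enough to sketch: for a degree-d polynomial the d-th forward differences are constant, so once you seed the table, every further value needs only additions – exactly what gears can do. a sketch in Python (mine, not Babbage’s notation), using x² + x + 41, the polynomial Babbage liked to demonstrate with:

```python
def difference_engine(initial_values, count):
    """Tabulate a polynomial from its first d+1 values using only addition."""
    # build the initial difference table from the seed values
    diffs = [list(initial_values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    table = list(initial_values)
    # "col" holds the trailing entry of each difference row;
    # one crank of the engine = adding the column upward
    col = [row[-1] for row in diffs]
    for _ in range(count - len(initial_values)):
        for i in range(len(col) - 2, -1, -1):
            col[i] += col[i + 1]
        table.append(col[0])
    return table

# f(x) = x^2 + x + 41, seeded with f(0), f(1), f(2)
print(difference_engine([41, 43, 47], 8))
# → [41, 43, 47, 53, 61, 71, 83, 97]
```

no multiplication anywhere after the seed values – which is what made a mechanical implementation plausible..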
distractions included .. insurance.. logs..
by 1832 a working prototype of a small Difference Engine (without a printer) had successfully been completed. And this is what Ada Lovelace saw in June 1833.
Ada’s encounter with the Difference Engine seems to be what ignited her interest in mathematics.
Ada taught some mathematics to the daughters of one of her mother’s friends. She continued by mail, noting that this could be “the commencement of ‘A Sentimental Mathematical Correspondence carried on for years between two ladies of rank’ to be hereafter published no doubt for the edification of mankind, or womankind”. It wasn’t sophisticated math, but what Ada said was clear, complete with admonitions like “You should never select an indirect proof, when a direct one can be given.” (There’s a lot of underlining, here shown as italics, in all Ada’s handwritten correspondence.)
Babbage seems at first to have underestimated Ada, …soon Babbage was opening up to her about many intellectual topics, as well as about the trouble he was having with the government over funding of the Difference Engine.
Ada was more sensitive than some to the bad notations of calculus (“why can’t one multiply by dx?”, etc.).
Ada’s relationship with her mother was a complex one. Outwardly, Ada treated her mother with great respect. But in many ways she seems to have found her controlling and manipulative. ….by February 6, 1841, Ada was feeling good enough about herself and her mathematics to write a very open letter to her mother about her thoughts and aspirations.
She wrote: “I believe myself to possess a most singular combination of qualities exactly fitted to make me pre-eminently a discoverer of the hidden realities of nature.” She talked of her ambition to do great things. She talked of her “insatiable & restless energy” which she believed she finally had found a purpose for. And she talked about how after 25 years she had become less “secretive & suspicious” with respect to her mother.
Babbage’s book is quite hard to read, opening for example with, “The notions we acquire of contrivance and design arise from comparing our observations on the works of other beings with the intentions of which we are conscious in our own undertakings.”
In apparent resonance with some of my own work 150 years later, he talks about the relationship between mechanical processes, natural laws and free will. He makes statements like “computations of great complexity can be effected by mechanical means”, but then goes on to claim (with rather weak examples) that a mechanical engine can produce sequences of numbers that show unexpected changes that are like miracles.
what had happened with the Difference Engine. Babbage had hired one of the leading engineers of his day to actually build the engine. But somehow, after a decade of work—and despite lots of precision machine tool development—the actual engine wasn’t done. ..his engineer quit, and insisted that he got to keep all the plans for the Difference Engine, even the ones that Babbage himself had drawn.
But right around this time, Babbage decided he’d had a better idea anyway. Instead of making a machine that would just compute differences, he imagined an “Analytical Engine” that supported a whole list of possible kinds of operations, that could in effect be done in an arbitrarily programmed sequence. ..most important, he figured out how to control the steps in a computation using punched cards of the kind that had been invented in 1801 by Jacquard for specifying patterns of weaving on looms
In October 1842, Menabrea published a paper in French based on his notes. When Ada saw the paper, she decided to translate it into English and submit it to a British publication.
Over the months that followed she worked very hard—often exchanging letters almost daily with Babbage..in those days letters were sent by post (which did come 6 times a day in London at the time) or carried by a servant (Ada lived about a mile from Babbage when she was in London), they read a lot like emails about a project might today, apart from being in Victorian English. Ada asks Babbage questions; he responds; she figures things out; he comments on them. She was clearly in charge, but felt she was first and foremost explaining Babbage’s work, so wanted to check things with him—though she got annoyed when Babbage, for example, tried to make his own corrections to her manuscript.
It’s charming to read Ada’s letter as she works on debugging her computation of Bernoulli numbers: “My Dear Babbage. I am in much dismay at having got into so amazing a quagmire & botheration with these Numbers, that I cannot possibly get the thing done today. …. I am now going out on horseback. Tant mieux.”
Babbage wanted one more thing: he wanted to add an anonymous preface (written by him) that explained how the British government had failed to support the project. Ada thought it a bad idea.
She saw herself as being a successful expositor and interpreter of Babbage’s work, setting it in a broader conceptual framework that she hoped could be built on.
“Your affairs have been, & are, deeply occupying both myself and Lord Lovelace…. And the result is that I have plans for you…” Then she proceeds to ask, “If I am to lay before you in the course of a year or two, explicit & honorable propositions for executing your engine … would there be any chance of allowing myself … to conduct the business for you; your own undivided energies being devoted to the execution of the work …”
In other words, she basically proposed to take on the role of CEO, with Babbage becoming CTO. …She wrote, “My own uncompromising principle is to endeavour to love truth & God before fame & glory …”, while “Yours is to love truth & God … but to love fame, glory, honours, yet more.”
If he does consent to what I propose, I shall probably be enabled to keep him out of much hot water; & to bring his engine to consummation, ……But on Babbage’s copy of Ada’s letter, he scribbled, “Saw AAL this morning and refused all conditions”.
..on August 18, Babbage wrote to Ada about bringing drawings and papers when he would next come to visit her. The next week, Ada wrote to Babbage that “We are quite delighted at your (somewhat unhoped for) proposal” …The next day, Ada responded to Babbage, “You are a brave man to give yourself wholly up to Fairy-Guidance!”, and Babbage signed off on his next letter as “Your faithful Slave”.
but then Ada gets sick.. cancer…. Opium no longer controlled her pain; she experimented with cannabis. By August 1852, she wrote, “I begin to understand Death; which is going on quietly & gradually every minute, & will never be a thing of one particular moment”. And on August 19, she asked Babbage’s friend Charles Dickens to visit and read her an account of death from one of his books.
Ada had made Babbage the executor of her will. And—much to her mother’s chagrin—she had herself buried in the Byron family vault next to her father, who, like her, died at age 36 (Ada lived 254 days longer).
Ada’s funeral was small; neither her mother nor Babbage attended.
..after Babbage died, his life work on his engines was all but forgotten …when programming began to be understood in the 1940s, Babbage’s work—and Ada’s Notes—were rediscovered.
It was a certain Bertram Bowden—a British nuclear physicist who went into the computer industry and eventually became Minister of Science and Education—who “rediscovered” Ada.
As interest in Babbage and Ada increased, so did curiosity about whether the Difference Engine would actually have worked if it had been built from Babbage’s plans. A project was mounted, and in 1991, after a heroic effort, a complete Difference Engine was built (with the printer added in 2000), with only one correction in the plans being made. Amazingly, the machine worked. Building it cost about the same, inflation adjusted, as Babbage had requested from the British government back in 1823.
What about the Analytical Engine? So far, no real version of it has ever been built—or even fully simulated.
She then explains that the Difference Engine can compute values of any 6th degree polynomial—but the Analytical Engine is different, because it can perform any sequence of operations. Or, as she says: “The Analytical Engine is an embodying of the science of operations, constructed with peculiar reference to abstract number as the subject of those operations. The Difference Engine is the embodying of one particular and very limited set of operations…”
Charmingly, at least for me, considering the years I have spent working on Mathematica, she continues at a later point: “We may consider the engine as the material and mechanical representative of analysis, and that our actual working powers in this department of human study will be enabled more effectually than heretofore to keep pace with our theoretical knowledge of its principles and laws, through the complete control which the engine gives us over the executive manipulation of algebraical and numerical symbols.”
A little later, she explains that punched cards are how the Analytical Engine is controlled, and then makes the classic statement that “the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.
Ada then goes through how a sequence of specific kinds of computations would work on the Analytical Engine, … “cycles” and “cycles of cycles, etc”, now known as loops and nested loops, giving a mathematical notation for them:
she discusses the idea of using loops to reduce the number of cards needed, and the value of rearranging operations to optimize their execution on the Analytical Engine, ultimately showing that just 3 cards could do what might seem like it should require 330.
she states: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. … Its province is to assist us in making available what we are already acquainted with.”
In other words—as I often point out—actually programming something inevitably lets one do more exploration of it.
Ada seems to have understood, though, that the “science of operations” implemented by the engine would not only apply to traditional mathematical operations. For example, she notes that if “the fundamental relations of pitched sounds in the science of harmony” were amenable to abstract operations, then the engine could use them to “compose elaborate and scientific pieces of music of any degree of complexity or extent”. Not a bad level of understanding for 1843.
What’s become the most famous part of what Ada wrote is the computation of Bernoulli numbers,…“I want to put in something about Bernoulli’s Numbers, … of how an implicit function may be worked out by the engine, without having been worked out by human head & hands first…. ”
Back in the 1600s, people spent their lives making tables of sums of powers of integers—in other words, tabulating values of 1^n + 2^n + … + m^n for different m and n. But Jakob Bernoulli pointed out that all such sums can be expressed as polynomials in m, with the coefficients being related to what are now called Bernoulli numbers.
To compute Bernoulli numbers the way Ada wanted takes two nested loops of operations. With the Analytical Engine design that existed at the time, Ada had to basically unroll these loops. But in the end she successfully produced a description of how B8 (which she called B7) could be computed.
As it’s printed, there’s a bug in Ada’s execution trace on line 4: the fraction is upside down. But if you fix that, it’s easy to get a modern version of what Ada did
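a modern sketch of that computation (mine, in Python – using the standard Bernoulli recurrence rather than Ada’s exact unrolled scheme): B_0 = 1, and for each m ≥ 1 the identity Σ_{j=0..m} C(m+1, j)·B_j = 0 is solved for B_m.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers via the standard recurrence."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        # solve sum_{j=0..m} C(m+1, j) * B_j = 0 for B_m
        acc = sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

B = bernoulli(8)
print(B[8])  # the number Ada's trace computed: → -1/30
```

exact rational arithmetic throughout, which is also what Ada’s hand trace had to manage..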
Curiously, even in our record-breaking computation of Bernoulli numbers a few years ago, we were basically using the same algorithm as Ada—though now there are slightly faster algorithms ..
The Analytical Engine and its construction were all Babbage’s work. So what did Ada add? Ada saw herself first and foremost as an expositor. Babbage had shown her lots of plans and examples of the Analytical Engine. She wanted to explain what the overall point was—as well as relate it, as she put it, to “large, general, & metaphysical views”.
To me, there’s little doubt about what happened: Ada had an idea of what the Analytical Engine should be capable of, and was asking Babbage questions about how it could be achieved. If my own experiences with hardware designers in modern times are anything to go by, the answers will often have been very detailed. Ada’s achievement was to distill from these details a clear exposition of the abstract operation of the machine—something which Babbage never did. (In his autobiography, he basically just refers to Ada’s Notes.)
I think the key was what he called his Mechanical Notation. He first wrote about it in 1826 under the title “On a Method of Expressing by Signs the Action of Machinery”.
It’s not quite clear what something like this means:
But it looks surprisingly like a modern Modelica representation—say in Wolfram SystemModeler. (One difference in modern times is that subsystems are represented much more hierarchically; another is that everything is now computable, so that actual behavior of the system can be simulated from the representation.)
computable because of hierarchy..?
I’m not sure why Babbage didn’t do more to explain his Mechanical Notation and his diagrams. Perhaps he was just bitter about people’s failure to appreciate it in 1826. Or perhaps he saw it as the secret that let him create his designs. And even though systems engineering has progressed a long way since Babbage’s time, there may yet be inspiration to be had from what Babbage did.
Babbage: energetic man who had many ideas,…thought of making mathematical tables by machine, .. inventing the Analytical Engine as a way to achieve his objective. He was good—even inspired—at the engineering details. He was bad at keeping a project on track.
Lovelace: intelligent woman who became friends with Babbage .. wrote an exposition of the Analytical Engine, and in doing so she developed a more abstract understanding of it than Babbage had—and got a glimpse of the incredibly powerful idea of universal computation.
.. it’s this idea of universal computation that for example makes software possible—and that launched the whole computer revolution in the 20th century.
Babbage’s Analytical Engine is the first explicit example we know of a machine that would have been capable of universal computation.
Babbage didn’t think of it in these terms, though. He just wanted a machine that was as effective as possible at producing mathematical tables. But in the effort to design this, he ended up with a universal computer.
When Ada wrote about Babbage’s machine, she wanted to explain what it did in the clearest way—and to do this she looked at the machine more abstractly, with the result that she ended up exploring and articulating something quite recognizable as the modern notion of universal computation.
..the idea of universal computation arose again, most clearly in the work of Alan Turing in 1936. Then when electronic computers were built in the 1940s, it was realized they too exhibited universal computation, and the connection was made with Turing’s work.
… wasn’t until the 1980s that universal computation became widely accepted as a robust notion. And by that time, something new was emerging—notably through work I was doing: that universal computation was not only something that’s possible, but that it’s actually common.
And what we now know (embodied for example in my Principle of Computational Equivalence) is that beyond a low threshold a very wide range of systems—even of very simple construction—are actually capable of universal computation.
A Difference Engine doesn’t get there. But as soon as one adds just a little more, one will have universal computation. So in retrospect, it’s not surprising that the Analytical Engine was capable of universal computation.
..I think one can fairly say that Ada Lovelace was the first person ever to glimpse with any clarity what has become a defining phenomenon of our technology and even our civilization: the notion of universal computation.
Today—in the Wolfram Language for example—we never store much in the way of mathematical tables; we just compute what we need when we need it. But in Babbage’s day—with the idea of a massive Analytical Engine—this way of doing things would have been unthinkable.
…she was drawn to abstract ways of thinking, not only in mathematics and science, but also in more metaphysical areas.
And she seems to have concluded that her greatest strength would be in bridging the scientific with the metaphysical—perhaps in what she called “poetical science”. It was likely a correct self-perception. For that is in a sense exactly what she did in the Notes she wrote: she took Babbage’s detailed engineering, and made it more abstract and “metaphysical”—and in the process gave us a first glimpse of the idea of universal computation.
the challenge is to be enough of an Ada to grasp what’s there—or at least to find an Ada who does.
..But at least now I think I have an idea of what the original Ada born 200 years ago today was like: a fitting personality on the road to universal computation and the present and future achievements of computational thinking.
Original Tweet: https://twitter.com/stephen_wolfram/status/674984408873623552
a new kind of science (book – 2002) – found via it being one of the three books taleb says he’s been savoring for over 10 yrs:
@nntaleb: Answer to Qs: Books I’ve been savoring for >10y (#Lindy)
+ Penrose: The Road to Reality
+ Wolfram: NKS
+ Histoire de la Vie Privée
(reached my limit on overdrive recommends. but this one wasn’t even showing)
wikipedia on nks:
A New Kind of Science is a best-selling, controversial book by Stephen Wolfram, published in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.
The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. Since its beginnings in the 1930s, computation has been primarily approached from two traditions: engineering, which seeks to build practical systems using computations; and mathematics, which seeks to prove theorems about computation. However, as recently as the 1970s, computing has been described as being at the crossroads of mathematical, engineering, and empirical traditions.
Wolfram introduces a third tradition that seeks to empirically investigate computation for its own sake: ..
..He argues that an entirely new method is needed to do so because traditional mathematics fails to meaningfully describe complex systems.
The basic subject of Wolfram’s “new kind of science” is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of a computational system, one very quickly finds instances of great complexity among its simplest cases [after a time series of multiple iterative loops, applying the same simple set of rules on itself, similar to a self-reinforcing cycle using a set of rules].
The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity.
interesting.. thinking of minsky‘s 100 very mathematical don’t make an einstein… and that we’re missing it.. because we don’t see the complexity.. yet.. future of 3 trillion smaller people.. sounds like what wolfram is suggesting here..
Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program’s definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program’s rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.
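rule 30 is the canonical instance of this.. a minimal sketch in python (the update rule, new cell = left XOR (center OR right), is the real rule 30; the width/steps here are arbitrary choices):

```python
def rule30(width=31, steps=15):
    # elementary cellular automaton rule 30:
    # new cell = left XOR (center OR right)
    cells = [0] * width
    cells[width // 2] = 1            # start from a single black cell
    rows = [cells]
    for _ in range(steps):
        p = rows[-1]
        rows.append([p[(i - 1) % width] ^ (p[i] | p[(i + 1) % width])
                     for i in range(width)])
    return rows

for row in rule30():                 # crude ascii rendering
    print(''.join('#' if c else '.' for c in row))
```

one line of rule, and the center column comes out random-looking enough that wolfram long used rule 30 as a random number generator in mathematica.. that’s the “not enough room in the program’s definition” point made concrete.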
in general Wolfram’s idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.
instead of reverse engineering our theories from observation, we can enumerate systems and then try to match them to the behaviors we observe. A major theme of NKS is investigating the structure of the possibility space. Wolfram argues that science is far too ad hoc, in part because the models used are too complicated and/or unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze.
Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be “reduced”), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models. Likewise, his idea of intrinsic randomness generation—that natural systems can generate their own randomness, rather than using chaos theory or stochastic perturbations—implies that computational models do not need to include explicit randomness.
Possibly the most important among these is an explanation as to why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like for instance the concept of “heat,” but simply a label for all systems whose computations are sophisticated. Wolfram argues that understanding this makes possible the “normal science” of the NKS paradigm.
At the deepest level, Wolfram argues that like many of the most important scientific ideas, the Principle of Computational Equivalence allows science to be more general by pointing out new ways in which humans are not “special”; that is, it has been thought that the complexity of human intelligence makes us special, but the Principle asserts otherwise. In a sense, many of Wolfram’s ideas are based on understanding the scientific process—including the human mind—as operating within the same universe it studies, rather than being outside it.
Principle of computational equivalence
The principle states that systems found in the natural world can perform computations up to a maximal (“universal”) level of computational power. Most systems can attain this level. Systems, in principle, compute the same things as a computer. Computation is therefore simply a question of translating input and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems.
any convos between minsky and wolfram..?
One common theme of examples and applications is demonstrating how little complexity it takes to achieve interesting behavior, and how the proper methodology can discover this behavior.
so.. perhaps we simplify the day.. in order to discover us.. ie: a nother way
Wolfram suggests that the theory of computational irreducibility may provide a resolution to the existence of free will in a nominally deterministic universe. He posits that the computational process in the brain of the being with free will is actually complex enough so that it cannot be captured in a simpler computation, due to the principle of computational irreducibility. Thus while the process is indeed deterministic, there is no better way to determine the being’s will than to essentially run the experiment and let the being exercise it.
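a way to make that reducible-vs-irreducible contrast concrete (a sketch, not wolfram’s code; rule numbers follow the standard elementary-cellular-automaton coding): rule 250 is computationally reducible.. a closed form jumps straight to any step.. while for rule 30 no such shortcut is known, so you just have to run it:

```python
def evolve(rule, width, steps):
    # run an elementary CA (rule 0-255) from a single black cell
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = [(rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return cells

def rule250_shortcut(width, t):
    # rule 250 (new cell = left OR right) is reducible: cell i at
    # step t is on iff |i - center| <= t and i - center + t is even,
    # so we can jump to step t without computing steps 1..t-1
    c = width // 2
    return [1 if abs(i - c) <= t and (i - c + t) % 2 == 0 else 0
            for i in range(width)]

assert evolve(250, 81, 20) == rule250_shortcut(81, 20)
# no analogous shortcut is known for evolve(30, ...) -- irreducibility
```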
A key tenet of NKS is that the simpler the system, the more likely a version of it will recur in a wide variety of more complicated contexts.
critics such as Ray Kurzweil have argued that it ignores the distinction between hardware and software; while two computers may be equivalent in power, it does not follow that any two programs they might run are also equivalent.
This has led him to the view (also considered in a 1981 paper by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a set of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps.
Wolfram’s claim that natural selection is not the fundamental cause of complexity in biology has led science journalist Chris Lavers to state that Wolfram does not understand the theory of evolution.
mar 1 2016 – ai and future – convo w/ edge:
3 min – what kinds of things can have intelligence/purpose… us, and what else… and the answer.. systems of nature.. ie: the weather has a mind of its own – not as silly as it seems… all sorts of systems doing effective computations.. turns out.. there’s a very broad equivalence between systems… not so special.. not so overly intelligent…
5 min – so then what makes us different… the particulars of our history which gives us the notions of our purpose/goals… so what we have to think about.. is these goals… that’s what humans contribute…
6 min – spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world
9 min – when comes to describing more sophisticated things.. kinds of things people build big programs to do..we don’t have a good way to describe those things with human natural language.. but we can build languages that do describe that… one question i’ve been interested in is..what does the world look like when most people can write code..
11 min – interesting language point is that today we have computer languages..for the most part.. intended for computers only…not intended for humans to read and understand…intended to tell computers in detail what to do. then we have natural language.. intended for human-to-human communication. i’ve been trying to build this knowledge-based language..intended for communication between humans and machines in a way where humans can read it and machines can understand it.. where we’re incorporating a lot of existing knowledge of world into language in same way that in human natural language we are constantly incorporating knowledge of the world into the language.. because it helps us in communicating things. one branch that I’m interested in right now is what the world looks like when most people can read and write code.
15 min – lot of purposes we have today are generated by scarcity ..scarce resources.. scarce time ..eventually .. forms of scarcity will disappear. most dramatic discontinuity… when we achieve effective human immortality…lot of current human purposes have to do with.. i’m only going to live a certain time so i’d better get a bunch of things done… what does it look like when things can be executed automatically.. don’t have kinds of drivers for purpose we have today..what does it look like… 1\ do people look back to scarcity era.. which is now being recorded broadly.. so that in future.. could really study us..and say.. that’s what it’s like to be human 2\ play video games all the time..
20 min – lot of money spent on making brain-like neural networks.. way turing’s work flowed in…but these neural networks didn’t do anything too exciting.. terrible thing happened in the ’60s to neural networks: minsky/papert wrote a book .. perceptrons ..they basically proved perceptrons couldn’t do anything interesting, which is correct..can only make linear distinctions between things. problem was.. people said..these guys have written a proof ..therefore no neural networks can do anything interesting so let’s forget about them… that happened for a while.
25 min – my original belief ..in order to make a serious computational knowledge system..first have to build a brain-like thing.. then have to feed it knowledge..then you’ll have good computational knowledge system.
i realized, as result of a bunch of science i’d done, .. there isn’t this bright line between what is intelligent and what is merely computational. i had assumed some magic thing/mech that allows us to be vastly more capable than anything that is merely computational….that’s what led to wolfram alpha… it works, to be able to take a large collection of the knowledge that’s in world and automatically answer questions on the basis of it using what are essentially merely computational techniques.
27 min – on thinking in steps.. what i discovered .. an alt way to do engineering.. more analogous to biology, ie: go out in universe (mining).. infinite number of programs.. just look randomly.. what do they do… you’d think all simple.. but not the case.. even simple programs.. can already do very sophisticated things.. interesting in understanding nature.. but also tech… so mine tech rather than just go step by step..then .. can we connect that capability to a human goal… can we entrain things in nature into tech to do what we want to achieve…
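a minimal sketch of that mining in python.. enumerate all 256 elementary cellular-automaton rules and crudely sort their behavior (the dies/cycles/ongoing buckets are my stand-in for wolfram’s behavior classes, not his actual classification):

```python
def step(cells, rule):
    # one update of an elementary CA (rule 0-255), ring topology
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def mine(width=41, steps=40):
    # run every rule from a single black cell and keep the ones
    # that neither die out nor fall into a previously seen state
    still_going = []
    for rule in range(256):
        cells = [0] * width
        cells[width // 2] = 1
        seen = {tuple(cells)}
        outcome = 'ongoing'
        for _ in range(steps):
            cells = step(cells, rule)
            if sum(cells) == 0:
                outcome = 'dies'
                break
            t = tuple(cells)
            if t in seen:
                outcome = 'cycles'
                break
            seen.add(t)
        if outcome == 'ongoing':
            still_going.append(rule)
    return still_going

print(mine())   # rule 30 is among the rules still going after 40 steps
```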
30 min – on music creation site… automated creativity.. people go there and came away with ideas/inspiration… attribute of originality/creativity available in tech/comp world …it’s the same thing as saying, go out into the physical world and go find these beautiful places to photograph in the world..they exist already…it’s a question of us picking the one we care about to look at.
32 min – let’s try to make a language that panders not to computers but to humans.. but can be converted into what computers understand… can you encapsulate knowledge we’ve accumulated to communicate w/computers
34 min – so one piece of ai – computer take knowledge and answer questions.. wolfram alpha has achieved that sort..
35 min – 1\ object recognition/identification.. which has changed over last year.. 5000/10 000 picturable nouns.. training… impresses us because that’s about what we do
38 min – 2\ voice to text.. 3\ language translation
40 min – on getting to precise everyday language.. symbolic representation
42 min – been interested in this.. so look back and see a lot of people like leibniz in the late 1600s, a man called john wilkins…this period when there were these things that they called philosophical languages. the idea of a philosophical language would be essentially what I’m now trying to do—a symbolic representation of the world. one thing that I like is I look at the philosophical language of wilkins, and you can see, how he divided things that were important in the world.
44 min – modern turing test: being able to have conversational thought.. i was thinking number one application is customer service… which wasn’t high on my list… compared to turing.. our convo w/computer.. shows us a screen back… as opposed to interaction w/human… what most people want is this visual.. non-human form of communication turns out to be richer… than human…. ie: if we were all fast graphic artists… in most human convo.. left w/pure language…
49 min – a good turing test for me.. is.. when will i have a bot that can respond to most of my email… i’ve been collecting info (emails and keystrokes) on me for 25 yrs.. i should be able to train that..
53 min – what i do like is the distribution of tech – equality amongst tech.. reasonable fraction it’s reasonably flat..
55 min – today’s programming will be obsolete in not very long.. only small set of people need to know language.. computers do rest… no good reason for humans to write all that stuff… what’s important is going from what human wants to do to getting machine to do that.. there’s the equalization this is producing… from having automated all of the stack.. unlocks vast range of people to let them make computers do things for them…
57 min – how to teach that kind of computational thinking… would love to see large number of random kids using knowledge based programming as sophisticated as anybody.. i think this is w/in reach.. what’s difficult is imagining things in a computational way…
59 min – if you’re using wolfram alpha code.. i’m responsible.. – in dna.. no one is responsible for code.. when have a designed language..
1:00 – may come time when we’ve managed to engineer things.. to design a lifelike thing that is as designed as a computer language is today. but we’re not at that point. we have to be using the molecular computer that we have, which is us and our biology. in terms of how to do that programming, it’s a super interesting question. If you look at the nanotechnology tradition, there’s been this idea of how do we achieve nanotechnology ..
we take technology as we understand it on a large scale today and we make it very small.
we say.. how can we make a cpu chip that is on an atomic scale.. maybe we’ll make it mechanically, but fundamentally we’re using same architecture as cpu chip that we know and love.. that isn’t the only approach one can take. a lot of the things i’ve done, looking at simple programs and what they do, suggest that you can have even very simple impoverished components, and with the right compiler, effectively, you can make them do very interesting things… compilation step not as gruesome as one might expect..
1:03 – might ask.. how can i compile program that i might care about down to that turing machine. i haven’t done that, but i think one will find a layer of nasty messy machine code, and then above that, it gets pretty simple. that layer of nasty messy machine code will add some inefficiency, maybe factor of 10,000, but factor of 10,000 is nothing when dealing w/scale of molecules as compared to large-scale things.
1:05 – how knowledge is transmitted.. 0\ genetic 1\ neurological transmission 2\ natural language – represent knowledge – communicate brain to brain (most important invention) 3\ knowledge based programming – take representation of world.. in precise/symbolic way.. understandable/executable by brain – i’m pretty sure this is a big deal.. just as language gave us civilization.. what will kbp give us…
1:08 – on question i’m super interested in .. this 4th level of knowledge communication.. what should we be imagining right now… ie: menu in code.. we change parts; language gave/accelerated bureaucracy.. what does that look like when most people can code.. how does coding world relate to cultural world..
1:10 – hs coding.. tacked onto others.. or … rethink all other areas.. ie: how do we study history w/code… imagine writing essay… et al
whoa. so not imagining.. imagining us still in school.. taking courses… writing essays…?
1:11 – no longer sterile.. knitted into language.. ie: math – basic math fits into all places.. not so much in humanities.. so similarly computation is basic way we should think about things.. then things become immediately executional..
1:12 – a kid can get fancy machine to do work as much as fancy researcher…
1:13 – intelligence and computation are kind of the same thing… so this question of .. does it have a purpose/goal.. you can ask about anything..
unpacking it: 1\ can you tell if the thing has a purpose.. given history.. easy to recognize purpose 2\ what do you see on earth that shows intelligence.. ie: straight line in great salt lake.. longest straight line made of lights…. road in australia.. rr in russia… nz.. perfect circle.. 3\ how to tell if et has a purpose
1:20 – one identifier for purpose is if it is minimal in achieving that purpose.. problem is.. most of what we have built is not that way.. ie: cpu chip..
1:23 – important question.. but very messy… ie: if we observe primes generated..we say.. what generated these… did it need whole history of civilization.. i don’t think there’s an abstract meaning.. sense of purpose.. i think there is no meaningful sense of abstract purpose… ie: how disappointing to go through all this and find an answer..
1:26 – much of science has been about short cutting… about nature.. .. end point w/o steps.. good news for us having meaningful lives (bad news for science).. there isn’t a way to short cut everything.. that’s why history means something… bad for science.. can’t make predictions.. the thing that has to be special about us .. is all these details about us… not some big abstract difference.. between us and clouds.. it’s rather a detailed thing.. of long process..
problem of abstract ai – when does a thing have a purpose.. when is a thing intelligent..
1:30 – there isn’t a bright line between intelligence and computation… rather, it’s a detailed difference that this brain-like thing was produced by this long history of civilization, etc, whereas this cellular automaton was just created by my computer in the last microsecond.
here’s one of my scenarios i’m curious about. let’s say there’s a time when human consciousness is readily uploadable into digital form..and pretty soon we have a box of a trill souls….in box.. hopefully nice molecular computing.. maybe it’ll be derived from biology in some sense.. but maybe not, but all kinds of molecules doing things, electrons doing things… box is doing all kinds of elaborate stuff…..then look at rock next to box..inside rock.. all kinds of elab stuff going on…what’s diff between rock and box of a trill souls.. answer – box of trill souls has this long history. ..details derived from the history of civ and people watching videos made in 2015 or whatever. rock came from its geo history but not particular history of our civ.
this question of ..
realizing that there isn’t this distinction between intelligence and mere computation leads you to imagine the future of civilization ends up being the box of trill souls, and then what is the purpose of that..
from our current pov, ie, in that scenario, it’s like every soul is playing video games basically forever. what’s the endpoint of that..
jun 2016 – on immortality
biggest discontinuity in human history – immortality
ie: algorithmic drugs.. getting smarter than biology has ever gotten
crazy that people aren’t investigating this..
The “serious effort” mentioned in this article by Stephen Wolfram is our @archmission. https://t.co/wMI6E5s1qU
Original Tweet: https://twitter.com/Nick_Slavin/status/958944620871856128
Of course, “what’s important” depends on who’s looking at it.
When it comes to communicating knowledge on a large scale, the only scheme we know (and maybe the only one that’s possible) is to use language—in which essentially there’s a set of symbolic constructs that can be arranged in an almost infinite number of ways to communicate different meanings.
It was presumably the introduction of language that allowed our species to begin accumulating knowledge from one generation to the next, and eventually to develop civilization as we know it. So it makes sense that language should be at the center of how we might communicate the story of what we’ve achieved.
Then there are cases where it’s not even clear whether something represents a language. An example is the quipus of Peru—that presumably recorded “data” of some kind, but that might or might not have recorded something we’d usually call a language:
but with all our abstract knowledge about mathematics, and computation, and so on, surely we can now invent a “universal language” that can be universally understood. Well, we can certainly create a formal system—like a cellular automaton—that just consistently operates according to its own formal rules. But does this communicate anything?
One place where the formal meets the actual is in the construction of theoretical models for things. We’ve got some actual physical process, and then we’ve got a formal, symbolic model for it—using mathematical equations, programs like cellular automata, or whatever. We might think that that connection would immediately define an interpretation for our formal system. But once again it does not, because our model is just a model, that captures some features of the system, and idealizes others away. And seeing how that works again requires cultural context.
so.. not definable.. the defining actually ends/deadens it
Imagine an intelligence that exists as a fluid (say the weather, for example). Or even imagine an aquatic organism, used to a fluid environment. Lots of the words we might take for granted about solid objects or locations won’t be terribly useful. And instead there might be words for aspects of fluid flow (say, lumps of vorticity that change in some particular way) that we’ve never identified as concepts that we need words for.
These features then in effect define the emergent symbolic language of the neural net. And, yes, this language is quite alien to us. It doesn’t directly reflect human language or human thinking. It’s in effect an alternate path for “understanding the world”, different from the one that humans and human language have taken.
Right now most people think of the Wolfram Language mainly as a way for humans to communicate with computers. But I’ve always seen it as a general computational communication language for humans and computers—that’s relevant among other things in giving us humans a way to think and communicate in computational terms. (And, yes, the kind of computational thinking this makes possible is going to be increasingly critical—even more so than mathematical thinking has been in the past.)
thinking – designing reality’s first two revs: communication and computation.. and perhaps how that has confined/deadened us
But the key point is that the Wolfram Language is capturing computation in human-compatible terms. And in fact we can view it as in effect giving a definition of which parts of the universe of possible computations we humans—at the current stage in the evolution of our civilization—actually care about.
what if that can’t be languaged.. defined.. computed.. and by saying it can.. we never get to what we care about
Another way to put this is that we can think of the Wolfram Language as providing a compressed representation (or, in effect, a model) of the core content of our civilization. Some of that content is algorithmic and structural; some of it is data and knowledge about the details of our world and its history.
and so.. ends up being not.. what we care about
There’s more to do to make the Wolfram Language into a full symbolic discourse language that can express a full range of human intentions (for example what’s needed for encoding complete legal contracts, or ethical principles for AIs.) But with the Wolfram Language as it exists today, we’re already capturing a very broad swath of the concerns and achievements of our civilization.
this speaks to what i’m thinking… ie: code for legal contract; ethical principles for ais.. not what 7 bn people’s souls care about
by providing a whole language—rather than just individual pictures or dioramas—we’re communicating in a vastly broader and deeper way.
are we..? or are we limiting ourselves..
And that’s a lesson for our efforts now. If we put math or science facts in our beacons, then, yes, it shows how far we’ve gotten (and of course to make the best impression we should try to illustrate the furthest reaches of, for example, today’s math, which will be quite hard to do). But it feels a bit like job applicants writing letters that start by explaining basic facts. Yes, we already know those; now tell us something about yourselves!
The trailing (binary) zeros cover the lack of precision in the pulsar periods.
A major theme of this post has been that “communication” requires a certain sharing of “cultural context.” But how much sharing is enough? Different people—with at least fairly different backgrounds and experiences—can usually understand each other well enough for society to function, although as the “cultural distance” increases, such understanding becomes more and more difficult.
We might then think of defining a distance between rules, determined by the size or complexity of the interpreter necessary to translate between them. But while this sounds good in principle, it’s certainly not an easy thing to deal with in practice. And it doesn’t help that interpretability can be formally undecidable, so there’s no upper bound on the size or complexity of the translator between rules.
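the true interpreter-size distance in the quote above is uncomputable, but a crude, computable stand-in exists: normalized compression distance (NCD) between the outputs of two rules. a minimal sketch, assuming elementary cellular automata as the “rules” being compared — this is not Wolfram’s proposal, just a standard information-theoretic proxy for illustration:

```python
import zlib

def ca_output(rule, width=101, steps=100):
    """Evolve an elementary CA from a single live cell; return the raw history bytes."""
    row = [0] * width
    row[width // 2] = 1
    out = bytearray()
    for _ in range(steps):
        out.extend(row)
        # each cell's next state is the rule bit indexed by its 3-cell neighborhood
        row = [
            (rule >> (row[(i - 1) % width] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return bytes(out)

def ncd(x, y):
    """Normalized compression distance: near 0 = similar, near 1 = unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a, b = ca_output(30), ca_output(110)
print(ncd(a, a), ncd(a, b))  # a rule's output is much closer to itself than to another rule's
```

the proxy inherits the same weakness the quote names: compression only approximates translatability, and two rules a short interpreter could relate can still look distant to a general-purpose compressor.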
So how might one characterize a civilization and its cultural context? One way is to ask how it uses the computational universe of possible programs. What parts of that universe does it care about, and what not?
measuring things.. not possible
Yes, I’ve spent much of my life building the single example of the Wolfram Language intended for humans. And now what I’m suggesting is to imagine the space of all possible analogous languages, with all possible ways of sampling and encoding the computational universe.
A few points are obvious. First, even though it might seem more “universal,” don’t send lots of content that’s somehow formally derivable. Yes, we could say 2+2=4, or state a bunch of mathematical theorems, or show the evolution of a cellular automaton. But other than demonstrating that we can successfully do computation (which isn’t anything special, given the Principle of Computational Equivalence) we’re not really communicating anything this way. In fact, the only real information about us is our choice of what to send: which arithmetic facts, which theorems, etc.
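the “evolution of a cellular automaton” mentioned above is exactly the kind of formally derivable content anyone could recompute. a minimal sketch of Wolfram’s Rule 30 (illustrative only — the point of the quote is that sending this conveys almost nothing beyond the choice to send it):

```python
def step(cells, rule=30):
    """One step of an elementary CA: rule bit indexed by the (left, center, right) neighborhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(width=31, steps=10, rule=30):
    """Start from a single live cell and collect every generation."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for row in evolve():
    print("".join("#" if c else "." for c in row))
```

running it prints the familiar triangular Rule 30 pattern — complex-looking, yet fully determined by eight rule bits, which is why it carries so little information about the sender.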
Of course, one could imagine just “going to the source” and starting to read out the content of a human brain. We don’t know how to do that yet.
focus on wrong things/details.. let’s try something diff