this is what i’d call the ai humanity needs..
Roger Schank (@rogerschank) tweeted at 9:11 AM on Tue, Jul 24, 2018:
All there is to know about learning and AI (via a brief story) https://t.co/QgTf9Kxbxn
same post here –
learning starts with curiosity (and so does AI) https://t.co/rvabihUZtv
Original Tweet: https://twitter.com/rogerschank/status/1022095077840248832
pulling out the pieces that resonate and responding to that (so not necessarily roger’s thinking..i hope it could be..)
- learning starts with a conversation/curiosity.
Listening can only work if the listener is curious too. A listener may not be curious about what the speaker is curious about but the speaker is trying to make the listener curious about something.
but the listener (after the mech listens in order to match).. would (potentially) be curious about the same thing as the speaker.. imagine that energy
2\ If they succeed the listener will attempt to find in their memory something that they have experienced so the listener can respond to the speaker with a story of their own, satisfying the goals of each. How might we do that? (Modern AI doesn’t even ask this kind of question, oddly.)
maybe we don’t need it to.. and maybe at that point we no longer call it ai..? i don’t know.. i just think that what we need most is for the mech (ai or not) to listen to 7bn curiosities .. everyday.. and then match them up locally.. and that’s where the convo .. the deeper listening .. can take place.. 2 or more locals w very similar if not the same curiosities.. that day
3\ Matching underlying goals and plans is a kind of pattern matching but pattern matching in AI these days tends to be about words or pixels and not about ideas. It is hard for a computer to pattern match ideas, so when we talk about how computers can learn we must be very sceptical about the kinds of things they are matching.
so imagine all we need the mech/ai to do is match curiosities.. close to ideas.. but not totally.. if we iterate often enough (everyday anew).. and it takes little energy (3 min self-talk) .. seems that a word match would be plenty for a revolution of everyday life..
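the kind of word-level matching imagined above could be sketched very simply.. a minimal sketch (all names, locales, and curiosity words here are hypothetical, just to show the shape of the idea): each person offers a few words from their daily self-talk, and the mech pairs up locals who share a word that day..

```python
# hypothetical sketch: pair up people in the same locale whose
# daily curiosity words overlap -- no judging, just word matching.
from itertools import combinations

def match_curiosities(entries):
    """entries: list of (name, locale, set_of_curiosity_words) for one day.
    Returns pairs of locals sharing at least one curiosity word."""
    matches = []
    for (a_name, a_loc, a_words), (b_name, b_loc, b_words) in combinations(entries, 2):
        if a_loc == b_loc:          # only match locally
            shared = a_words & b_words
            if shared:              # at least one word in common that day
                matches.append((a_name, b_name, shared))
    return matches

# one day's (made-up) entries
today = [
    ("ana", "detroit", {"permaculture", "bees"}),
    ("ben", "detroit", {"bees", "guitar"}),
    ("cai", "lisbon",  {"bees"}),
]
print(match_curiosities(today))  # [('ana', 'ben', {'bees'})]
```

ana and ben get matched because they're in the same place and both curious about bees that day; cai isn't, even with the same word, because the match is local.. and since the whole thing re-runs fresh each day, a plain word match like this is all the machinery the idea seems to need..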
4\ Explanations are the basis of understanding. Bob was searching for an explanation.
first a match.. then an explanation..
He unconsciously constructed an explanation: maybe the actors didn’t want to accede to the request because they thought that the request was too extreme.
maybe ai (less judgmental/assuming/constructing) would be better at that first swatch of matching..?
6\ But what do we match on? Certainly not words or pixels. We match on high level abstractions like goals, plans, and intentionality. My goal was to eat the way I like. Bob’s goal was to look the way he wanted. But at a higher level of abstraction my goal was to get someone to do something for me and so was his. So any explanation would have had to have been about convincing other people to do what we wanted. That kind of goal (how to convince someone) was never actually discussed but that is what we were both curious about and such goals drive learning.
we could assume everyone’s goal is just to find someone local with a similar curiosity that day.. so thinking word(s) may be enough
you will never make computers intelligent by focussing on words, no matter how well you can count them or match them. Everything starts with goals and the ideas that underlie them. Dogs have goals but they don’t have words. Amazingly, dogs can think intelligently about getting what they want. When modern AI can do what dogs do every day in order to achieve the real goals that they have, please let me know.
see.. i don’t think we need that (and i don’t think a machine can ever do that).. i think all we need is a non judgmental listening ear and a word matcher.. aka: augmenting interconnectedness
more from Roger:
Cognitive computing is not cognitive at all » Banking Technology bankingtech.com/829352/cogniti…
People learn from conversation and Google can’t have one.
DougEngelbot (@DEngelbot) tweeted at 6:15 AM – 25 Nov 2018:
Our concept of language as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of. #augmentintellect (http://twitter.com/DEngelbot/status/1066681881746423808?s=17)
It never should have been a surprise that computers would be (much, much) better at chess and go than humans. It’s like being surprised that computers are better at math. Now the question is what to do w/ deeply permutational human culture, post computer. https://t.co/MLI1D79KAq
Original Tweet: https://twitter.com/ibogost/status/1078665118005772290
This ought to be a huge identity crisis for at least one (small) set of human actors: Game designers and players. How do you design a game that a computer is *bad* at in a structural, material way? And not just today’s “classical” computers but quantum computers too?