do you trust this computer

____________

free to stream (1:17) courtesy of elon musk (till 4.8.18)

http://doyoutrustthiscomputer.org/watch

notes/quotes:

2 min – we created it.. so this intelligence will contain part of us.. i think the question is.. will it contain the good parts or the bad parts

trust

3 min – jonathan nolan (writer/director – to me.. seems like the one who speaks/appears most often): we’ve cried wolf long enough that the public has stopped paying attention.. because it feels like sci fi (on sci fi films).. but it’s not.. the general public is about to get blindsided by this

4 min – john markoff (nyt): as a society.. surrounded.. by this tech that we make decisions with.. aided by a set of algos that we have no understanding of

max tegmark (mit): we’re already pretty jaded by the idea we can talk to our phone and it understands us.. 5 yrs ago.. no way

5 min – sebastian thrun: self driving cars

eric horvitz: i’ve lost many family members to auto accidents.. it’s pretty clear we could almost eliminate car accidents.. 30 000 in us.. mn around the world.. in a year

david ferrucci (ibm watson): in health care.. can save lives..

brian herman: cancer center – all the things the radiologist brain does in 2 min .. the computer does instantaneously

6 min – shivon zilis (open ai): understanding our genetic code.. to diagnose disease and create personalized treatments

ray kurzweil: primary purpose of these machines will be to extend our own intelligence..

stuart russel: how could a smarter machine not be a better machine.. it’s hard to say when i started to think that was a bit naive

7 min – uc berkeley

gautham kesineni (student): stuart.. basically a god in ai.. he wrote the book that almost every uni uses..

stuart russel: i used to say it was the most intelligent text book.. now i just say it’s the pdf that’s stolen most often.. ai is about making computers smart.. and from the pov of public.. what counts as ai is just something that’s surprisingly intelligent compared to what we thought computers would typically be able to do

john markoff: ai is a field of research that tried to basically simulate all kinds of human capabilities..we’re in the ai era.. sv has the ability to focus on one bright shiny thing.. it was social networking/media over the last decade and it’s pretty clear that the bit has flipped

8 min – jonathan nolan: when we look back at this moment.. when was the first ai.. it’s not sexy.. and it isn’t the thing we consider in the movies.. but you could make a great case that it was google.. they created a search engine.. a way for people to ask any question they wanted and get the answer they needed

stuart russel: most people aren’t aware that what google is doing is actually a form of ai

elon musk: w each search we train it to be better.. sometimes you type in the search and it tells you the answer before you finish asking the question.. didn’t used to be able to do that

jonathan nolan: that is ai.. years from now we will try to understand.. how did we miss it

9 min – john markoff: it’s one of these striking contradictions that we’re facing.. google and fb et al have built businesses based on giving us as a society free stuff.. but it’s a faustian bargain.. they’re extracting something from us in exchange.. but we don’t know what code is running on the underside and why.. we have no idea.. it does strike right at the issue of how much we should trust these machines

10 min – jerry kaplan (stanford): people don’t realize they are constantly being negotiated w by machines.. whether price in amazon cart.. flight.. hotel.. what you’re experiencing are machine learning algos that have determined that a person like you is willing to pay 2 cents more and is changing the price

michal kosinski (stanford): a computer looks at millions of people simultaneously.. for very subtle patterns you can take seemingly innocent digital footprints such as playlist on spotify or amazon purchase.. then use algo to translate this into very detailed/accurate/intimate profile
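
a minimal sketch of the kind of model kosinski describes.. a plain regression from behavioral footprints (here a made-up 0/1 matrix of page likes) to a personality trait score.. every number below is invented for illustration

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_people, n_likes = 1000, 50
likes = rng.integers(0, 2, size=(n_people, n_likes))       # who liked which page (0/1)
hidden = rng.normal(size=n_likes)                           # invented link between likes and a trait
openness = likes @ hidden + rng.normal(scale=0.5, size=n_people)

model = Ridge(alpha=1.0).fit(likes[:800], openness[:800])   # learn the mapping from 800 people
preds = model.predict(likes[800:])                          # profile the other 200 from likes alone
print("correlation on held-out people:", np.corrcoef(preds, openness[800:])[0, 1])
```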

jerry kaplan: there is a dossier on each of us that is so extensive it would be accurate to say that they know more about you than your mother does

well.. since most of us are not us.. not really

11 min – max tegmark: major cause of the recent ai breakthru isn’t just that some dude had a brilliant insight all of a sudden.. but simply that we had much bigger data to train them on and vastly better computers

rana el kaliouby (affectiva): the magic is in the data.. it’s a ton of data.. i mean it’s data that’s never existed before.. we’ve never had this data before..

which is what is off.. (because we are off).. begs we focus on self-talk as data.. again to your point.. data we’ve never had before.. but.. would be more accurate/useful/humane.. as it could be

sean gourley (primer): we’ve created techs that allow us to capture vast amounts of info.. think of bn cell phones on planet.. we’re all giving off huge amounts of data individually.. cars.. cameras.. satellites.. climate.. nsa.. geopolitical situation.. the world today is literally *swimming in this data

cool.. but if data is not legit (to alive people).. energy wasted.. imagining one chip.. doing more than we can imagine.. exponentially more (bartlett expo law)

ie: our health.. yeah it’s helping us now.. but imagine if we weren’t sick to begin with.. same with oceans.. climate.. et al.. we have no idea the potential we’re missing .. because we’re focusing on the wrong data.. not even going into nsa.. war..et al..

gershenfeld sel could get us whales out of sea world..  rather out of the *quagmire.. ie: we’re *swimming in the compromise

12 min – michal kosinski: back in 2012 ibm estimated an avg human being leaves 500 megabytes of digital footprints every day.. if you wanted to back up only 1 day’s worth of data.. and print it out on letter sized paper.. double sided.. and stack it up.. it would reach the surface of the sun 4x over.. every day
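
rough back-of-envelope on that 500 mb figure.. every input here is an assumption (world population, characters per printed page, sheet thickness).. the point is just seeing the scale of the stack against the earth-sun distance

```python
bytes_per_person_per_day = 500e6        # the quoted 500 megabytes
people = 7.5e9                          # assumed world population
total_bytes = bytes_per_person_per_day * people

chars_per_sheet = 6000                  # assumed: ~3000 characters per side, double sided
sheet_thickness_m = 0.0001              # assumed: 0.1 mm per sheet
stack_m = (total_bytes / chars_per_sheet) * sheet_thickness_m

earth_sun_m = 1.496e11                  # average earth-sun distance
print(f"stack: {stack_m / 1e9:.0f} mn km.. earth-sun: {earth_sun_m / 1e9:.0f} mn km.. ratio: {stack_m / earth_sun_m:.2f}")
```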

bartlett expo law

13 min – jerry kaplan: *the data itself is not good or evil..  **it’s how it’s used.. ***we’re relying on the goodwill of these people and on the policies of these companies.. there is no legal req for how they can/should use that kind of data

*worst is it’s not legit (aka: not us).. **it’s actually that we’re using it.. spending our energy/time on it.. ***actually relying on manufactured consent.. voluntary compliance.. et al.. more than anything

john markoff: that to me is at the heart of the trust issue

james barrat (our final invention): right now there’s a giant race for creating machines that are as smart as humans: google.. really kind of the manhattan project of ai.. they’ve got the most money/talent.. they’re buying up ai/robotics co’s

tim urban (wait but why): people still think of google as a search engine and their email provider and a lot of other things we use on a daily basis.. but behind that search box are 10 mn servers.. that makes google the most powerful computing platform in the world.. google is now working on an ai computing platform that will have 100 mn servers.. so when you’re interacting w google..we’re just seeing a toe nail of something that is a giant beast in the making.. and the truth is.. i’m not even sure that google knows what it’s becoming

bartlett expo law

14 min – d scott phoenix (vicarious): if you look inside at what algos are being used at google.. it’s tech largely from the 80s.. so these are models that you train by showing them a 1, a 2 and a 3.. and it learns.. not what a 1 is or a 2 is.. it learns what the diff between a 1 and a 2 is.. it’s just a computation
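
a tiny sketch of the point being made.. a discriminative classifier trained on examples labeled "1" and "2" only learns a boundary between the two clusters, not any model of what a 1 or a 2 actually is.. the 2-d toy data below stands in for digit images

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
ones = rng.normal(loc=[0.0, 0.0], size=(200, 2))   # pretend these points are images of "1"
twos = rng.normal(loc=[3.0, 3.0], size=(200, 2))   # pretend these points are images of "2"
X = np.vstack([ones, twos])
y = np.array([1] * 200 + [2] * 200)

clf = LogisticRegression().fit(X, y)                # all it learns is a line separating the clusters
print("boundary weights:", clf.coef_, clf.intercept_)
print("a point near the '2' cluster:", clf.predict([[3.1, 2.9]]))
```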

john markoff: in the last half decade where we’ve made this rapid progress.. it has all been in pattern recognition..

pattern recognition off non legit data (if we’re talking human\e energy/potential).. another version of wrong fractal/pattern

max tegmark: most of the old fashioned ai was when we would tell our computers how to play a game like chess.. from the old paradigm where you just tell the computer exactly what to do

15 min – david ferrucci: no one at the time thought a machine had the precision/confidence/speed to play jeopardy well enough against the best humans

ray kurzweil: watson actually got its knowledge by reading wikipedia.. and 200 mn pages of natural language documents

david ferrucci: you can’t program every line of how the world works.. the machine has to learn by reading

16 min – d scott phoenix: watson is trained on huge amounts of text.. but it’s not like it understands what it’s saying.. it doesn’t know that water makes things wet the way you and i do.. by touching water and by seeing the way things behave in the world

david ferrucci: a lot of language ai today is not building logical models on how the world works.. rather it’s looking at how the words appear in the context of other words

rowson mechanical law

james barrat: david ferrucci developed ibm’s watson and somebody asked him.. does watson think.. and he said.. does a submarine swim.. and what he meant was.. when they developed submarines they borrowed basic principles of swimming from fish.. but sub swims farther/faster than fish.. outswims fish

andrew ng (google brain): watson winning jeopardy will go down in history of ai as significant milestone.. we tend to be amazed when a machine does so well.. i’m even more amazed when a computer beats humans at things that humans are naturally good at.. this is how we make progress..

?

17 min – andrew ng: in the early days of the google brain project i gave the team a very simple instruction.. which was.. build the biggest neural network possible.. w 1000 computers

elon musk: a neural net is something very similar to how the brain works.. it’s very probabilistic.. but with contextual relevance

tim urban: in your brain you have long neurons that connect to thousands of other neurons and you have these pathways that are formed/forged based on what the brain needs to do.. when a baby tries something and succeeds.. there’s a reward.. and that pathway that created the success is strengthened.. if it fails at something.. the pathway is weakened.. so over time the brain becomes honed to be good at the environ around it
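
a minimal sketch of that strengthen-on-success / weaken-on-failure idea.. an agent keeps a "pathway strength" per action and nudges it up or down depending on whether a made-up environment rewards the attempt

```python
import random

strength = {"reach_left": 1.0, "reach_right": 1.0}      # starting pathway strengths
success_rate = {"reach_left": 0.2, "reach_right": 0.8}  # invented environment

for _ in range(500):
    action = random.choices(list(strength), weights=list(strength.values()))[0]
    succeeded = random.random() < success_rate[action]
    strength[action] += 0.05 if succeeded else -0.02    # strengthen on success, weaken on failure
    strength[action] = max(strength[action], 0.01)      # keep the pathway from vanishing entirely

print(strength)   # the rewarded pathway ends up much stronger
```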

andrew ng: really it’s just getting machines to learn by themselves.. it’s called deep learning.. and deep learning and neural networks mean roughly the same thing

max tegmark: deep learning is a totally diff approach.. where the computer learns more like a toddler.. getting a lot of data and eventually figuring stuff out.. the computer just gets smarter and smarter as it has more experiences..

18 min – andrew ng: so imagine.. a neural network of 1000 computers.. and it wakes up not knowing anything and we made it watch youtube for a week.. after a week.. found a neuron that learned to detect human faces – since they show up a lot in youtube videos.. also a neuron that had learned to detect cats

19 min -james barrat: it’s all pretty innocuous when you think about the future.. it all seems kind of harmless/benign.. but we’re making cognitive architectures that will fly farther/faster than us and carry bigger load.. and they won’t be warm/fuzzy

david ferrucci: in 3-5 yrs.. you will see a computer system that will be able to autonomously learn how to understand.. how to build understanding.. not unlike the way the human mind works

i don’t know

on millions becoming jobless

21 min -jonathan nolan: a job isn’t just about money.. on a biological level.. it serves a purpose.. it becomes a defining thing.. when the jobs went away in any given civilization.. it doesn’t take long until that turns into violence

that wasn’t because the job was gone.. it’s because as you say.. the intoxication that the job defines a person..

22 min – james barrat: we face a divide between rich and poor because that’s what automation and ai will provoke.. a greater divide between the haves and have nots

depends on what we automate.. depends on what data we use

24 min -brian herman: it seems we are feeding and creating it .. but in a way we are a slave to the tech because.. we can’t go back (after.. dr now does only 1 open hysterectomy a year.. afraid he can’t remember how)

25 min – sean gourley: the machines take bigger and bigger bites out of our skill set at an ever increasing speed.. so we’ve got to run faster and faster to keep ahead of the machines

jonathan nolan: this is the future we’re heading into.. we want to design our companions.. we’d like to see a human face on the ai.. therefore gaming our emotions will be depressingly easy

26 min -rana el kaliouby: i started thinking.. what if this device could sense i was distressed or having a bad day..what could that open up.. (then going into first grade classroom)

29 min – osaka japan – human like robots – intention and desire

?

31 min – sean gourley: i think the key point will come when all the major senses are replicated.. when we replicate our senses.. is that when it becomes alive

jonathan nolan: so many ai being built to understand us.. but what happens when ai can adjust their courage/efforts/cunning..

32 min – stuart russel: the thing that worries me.. that keeps me awake right now.. is the development of autonomous weapons.. drones.. to fully automated weapons that choose their own targets

34 min – sean gourley: ai will have as big an impact on military as combustion engine had at turn of cent.. it will literally touch everything that the military does.. so .. whoever has best ai will probably achieve dominance on this planet

35 min – peter singer: long history of sci fi not only predicting but shaping the future

36 min – peter singer: reason us joined ww1.. germans using subs.. thought that was horrific.. but move timeline forward to pearl harbor.. 5 hrs after pearl harbor.. order goes out for unrestricted sub warfare against japan

37 min – christine fox (sect of defense): the role of intelligent systems is growing very rapidly in warfare.. everyone is pushing in the unmanned realm

38 min – stuart russel: if you make these weapons.. they’re going to be used to attack human populations in large numbers.. autonomous weapons by nature are weapons of mass destruction because they don’t need a human to guide/carry them.. you only need one person to write a little program..

christine fox: it just captures the complexity of this field.. amazing.. rewarding.. but also frightening.. it’s all about trust

39 min – stuart russel: (on letter being signed by 6000 to ban autonomous weapons) i’m getting a lot of visits .. from high ranking officials.. who wish to emphasize that american military dominance is very important and autonomous weapons may be part of the defense dept’s plan.. that’s very very scary.. because the value system of military developers of tech is not the same as the value system of the human race..

40 min – future of life institute to grapple w these concerns.. all these people secretive.. so interesting to see them all together

sean gourley: sitting around table w best/smartest minds in world.. and .. what really struck me was .. maybe human brain is not able to fully grasp the complexity of the world we’re confronted with

or maybe.. our roadblock is this .. best in world.. mindset.. we’re missing all the other parts .. we need all the parts of us.. perhaps that’s the beginning of ai mindset.. people already controlling others

41 min – elon musk: what makes deep mind unique.. it’s focused on creating digital super intelligence.. faster.. and .. smarter than all humans on earth combined

43 min – tim urban: (on rapid/expo advancement of deep mind playing go) people say that’s just a board game.. poker involves reading people’s lying/bluffing.. it’s not an exact thing.. a computer will never do that.. they took the best poker players in the world and it took 7 days.. the pattern here is that ai might take a little while to wrap its tentacles around a new skill but when it does.. it’s unstoppable..

44 min – elon musk: deep mind has admin access to google’s servers.. this could be an unintentional trojan horse.. w a little update.. that ai could take control of the whole google system.. which means they could do anything.. look at all your data.. anything..

begs we shift/leap to diff data.. so that doing anything ness becomes irrelevant.. ie: gershenfeld sel

james barrat: problem is.. we’re not going to reach human intelligence and stop.. we’re going to go beyond.. called super intelligence.. anything smarter than us

? don’t think so

max tegmark: ai at super human level.. if we succeed w that it will be by far the most powerful invention ever made and.. the last invention we’ll ever have to make.. and if we create ai smarter than us.. we have to be open to the possibility of losing control to them

45 min – elon musk: ai doesn’t have to be evil to destroy humanity.. if ai has a goal and humanity happens to be in the way.. will destroy humanity as a matter of course

48 min – brian herman: trust is such a human experience.. i have a person coming in w/aneurism.. they want to look in my eyes and know they can trust this person w their life.. on agonizing 10 min over an operation because i know things.. that a computer doesn’t.. it just does the thing.. i wanted the ai in this case.. but can ai be compassionate.. we are the sole embodiment of humanity and it’s a stretch for us to accept that a machine can be compassionate/loving in that way.. part of me doesn’t believe in magic.. but part of me has faith that there is something beyond the sum of the parts.. that there’s at least a one ness in our shared ancestry/biology/history.. some connection there.. beyond machine

one ness

rowson mechanical law

51 min – brian herman: other side of that.. does computer know it’s conscious or can it be or does it care.. does it need to be conscious/aware

53 min – hod lipson (columbia uni): back in 2005 – started trying to build machines w self awareness.. ie: robot learned to track human faces w/o us programming it.. something else going on there.. not just programming

54 min – eric horvitz: i’m not sure it (worrying) is going to help

55 min – jerry kaplan: nobody knows what it means for a robot to be conscious.. there is no such thing.. the truth is.. machines are natural psychopaths.. ie: in a matter of minutes.. you’ll have trillions of dollars

56 min – justin wisz (vestorly): the short version of what happened (market crash).. algos responded to algos.. compounded on itself over and over again.. no matter
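
a toy sketch of "algos responded to algos".. two invented sell rules that each react to the price moves the other one causes, compounding a small dip.. numbers and thresholds are made up for illustration

```python
price = 100.0
history = [price]
for _ in range(30):
    drop = (history[0] - price) / history[0]       # how far below the starting price we are
    bot_a_sells = 1.0 if drop > 0.01 else 0.0      # rule a: sell once the price slips 1%
    bot_b_sells = 2.0 if drop > 0.02 else 0.0      # rule b: sell harder past 2%
    price -= 0.2 + bot_a_sells + bot_b_sells       # a small baseline dip plus whatever the algos dump
    history.append(price)

print([round(p, 1) for p in history])              # a slow drift turns into a slide once the rules trigger each other
```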

57 min – jonathan nolan: ai in finance is so primed for manipulation.. no regulator could keep up w it..

so let’s disengage from finances

jerry kaplan: if you give them a goal they will relentlessly pursue that goal.. how many programs like this.. nobody knows

automating ineq 

michal kosinski: one of the fascinating aspects of ai in general is that no one really understands how it works.. even the people who create ai don’t really/fully understand.. because it has millions of elements.. it becomes completely impossible for a human being to understand what’s going on

58 min – hannes grassegger (econ journal): microsoft set up this ai.. tay on twitter.. a chatbot.. tay.ai.. they started out in the am.. because of trolls.. w/in 24 hrs.. microsoft’s bot became a terrible person.. they had to literally pull tay off the net.. nobody had foreseen this

michal kosinski: the whole idea of ai is that we are not telling it exactly how to achieve a given outcome/goal.. ai develops on its own

jonathan nolan: we’re worried about super intelligent ai.. the master chess player that will out maneuver us.. but ai won’t actually have to be that smart to have massively destructive effects on human civilization.. we’ve seen over the last cent.. it doesn’t take a genius to knock history off in another direction.. and it won’t take a genius ai to do the same thing

59 min – bogus elections

fb is really the elephant in the room..

fb

michal kosinski: ai running fb newsfeed.. the task for ai is.. keeping users engaged.. but no one really understands exactly how this ai is achieving this goal

sounds like voluntary compliance.. manufactured consent.. et al..

jonathan nolan: fb is building an elegant mirrored wall around us.. a mirror that will ask.. who’s the fairest of them all.. and it will answer.. you you you.. time and again.. slowly begin to warp our sense of reality/politics/history/global-events.. until determining what’s true.. not true.. is virtually impossible..

1:00 – michal kosinski: the problem is that ai doesn’t understand that.. ai just had a mission: max user engagement.. and it achieved that.. even fb engineers want to get rid of fake news.. but how do you.. if you can’t read it all personally
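
a tiny sketch of what "maximize user engagement" looks like as a ranking rule.. score each candidate post by a predicted probability of a click/reaction and show the highest first.. the scores below are made up, there’s no real model behind them

```python
candidates = [
    {"post": "cousin's vacation photos", "p_engage": 0.11},
    {"post": "calm local news item",     "p_engage": 0.07},
    {"post": "outrage-bait headline",    "p_engage": 0.34},
]
feed = sorted(candidates, key=lambda c: c["p_engage"], reverse=True)   # most engaging first
for item in feed:
    print(item["p_engage"], item["post"])
# nothing in this objective asks whether the top item is true or good for the reader
```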

jonathan nolan: it’s not terribly sophisticated.. but it is terribly powerful.. what it means is that your view of the world.. which 20 yrs ago was determined by if you watched the nightly news.. 3 diff networks/anchors.. largely could agree on objectivity.. well that objectivity is gone.. fb has completely annihilated it

1:01 – jonathan nolan: if most of your understanding of how the world works is derived from fb.. facil’d by algo software that tries to show you the news you want to see.. that’s a terribly dangerous thing.. the idea that we have not only set that in motion.. but allowed bad faith actors access to that info

1:02 – jonathan nolan: cambridge analytica emerged quietly.. as a co that.. according to its own hype.. has the ability to use this tremendous amount of data in order to effect societal change (4-5 000 data points on every individual in the us).. weapon used in totally wrong direction.. bullet before gun

ca

1:04 – jonathan nolan: trump and brexit.. ai to create.. 2 of most ground shaking pieces of political change in last 50 yrs.. if we believe the hype.. connected to a piece of software created by a prof at stanford

1:05 -michal kosinski: back in 2013 – i warned against this

1:06 – hannes grassegger: what michal had done is gather the largest data set ever of how people engage on fb

michal kosinski: our idea was that instead of tests/questions/surveys.. we could simply look at behaviors that we are all leaving behind to understand.. openness, conscientiousness, neuroticism

hannes grassegger: you can easily buy personal data.. where you live, clubs you’ve joined..

michal kosinski: first from small amount of data.. but now just from profile of face.. could be very dangerous

1:10 – shivon zilis: we want good ways to interact w this tech.. so it ends up augmenting us

elon musk: it’s incredibly important that ai not be other.. *it must be us.. i could be wrong about what i’m saying.. i’m **certainly open to any ideas if someone can suggest a path that’s better.. but i think we’re really going to have to.. either merge w ai or be left behind..t

*begs self-talk as data

**how open.. ie: can you hear me elon

1:11 – elon musk: the least scary future i can think of is one where we have at least democratized ai.. at least w an evil dictator.. the human is going to die.. but w ai.. there would be no death.. then an immortal dictator from which we can never escape

been experiencing that for some time.. just less visible..

black science of people/whales

1:13 – pursuit of ai is a multi bn dollar industry w almost no regulations

____________
