nick bostrom


intro’d to Nick (not sure of original share) via rabbit holing in less wrong

http://lesswrong.com/about/

algo from inside

books on everything…

how bias hurts

hindsight

http://lesswrong.com/lw/im/hindsight_devalues_science/

We need to make a conscious effort to be shocked enough.

led to finding Nick as founder of future of humanity institute

https://www.fhi.ox.ac.uk/about/mission/

The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. Based in the Faculty of Philosophy, it enables leading researchers to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.

We focus our research where we think it can make the greatest positive difference. This means we pursue questions that are (a) critically important for humanity’s future, (b) unduly neglected, and (c) for which we have some idea for how to obtain an answer or a useful new insight.

esp intrigued with his/fhi’s desire to solve biggest problems.. because – deep/simple/open enough matters huge.. ie: short

_________

ted 2009 – philosophical quest for biggest problems

sometimes we don’t see a problem because either it’s too familiar or too big..

1st big problem – death

2 min – on tragedy of people dying – compared to library of congress burning down..

2nd big problem – existential risk.. threat to survival

6 min – when life was fantastic… we realize just how good life can be and we wonder why it can’t be like that all the time… hard to recall in a normal frame of mind

7 min – 3rd big problem… life isn’t usually as wonderful as it could be..

8 min – if we want this.. what would have to change.. the answer – we would have to change..

10 min – tech to transform human condition.. not just little gadgets.. ie: learning to read .. to arithmetic.. changes the brain..

13 min – on enhancing our deafness to certain values

16 min – to fix this 3rd problem – i think we need to develop the means to go out into this larger space..

a nother way.. ie: hosting life bits…. via self talk as data

_________

ted 2015 – what happens when our computers get smarter than we are

on what the average person is like.. compared to vision of human enhancement

5 min – potential for superintelligence.. on awakening potential of ai

algorithm ness.. relating to organism ness.. – (Robert Epstein et al)

8 min – machine intelligence is the last invention humans will need to make..

?

9 min – we need to think of intelligence as an optimisation process…

10 min – general pt: if you create a really powerful optimisation process to max for objective x.. you better make sure your defn of x incorporates everything you care about..

13 min – on ai being on our side.. caring about our problems.. create an ai that uses its intelligence to learn our values.. this can happen and outcome could be good for humanity.. conditions need to be set up in just right way…

so .. focus on self-talk as data

15 min – i think we should work out the control problem in advance..

something else to do as control.. so that we will all let go enough.. trust enough.. so we can all dance/be-us

16 min – we might say.. one thing that really mattered… want to get this one thing right..

indeed.. for (blank)’s sake

_________

find/follow Nick:

his site:

http://www.nickbostrom.com/

future of humanity site:

http://www.fhi.ox.ac.uk/

fhi twitter:

https://twitter.com/fhioxford

on ted:

https://www.ted.com/speakers/nick_bostrom

wikipedia small

Nick Bostrom (English /ˈbɒstrəm/; Swedish: Niklas Boström, IPA: [ˈbuːˈstrœm]; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, the reversal test, and consequentialism. He holds a PhD from the London School of Economics (2000). In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is currently the founding director of the Future of Humanity Institute at Oxford University.

…..Bostrom’s work on superintelligence – and his concern for its existential risk to humanity over the coming century – has brought both Elon Musk and Bill Gates to similar thinking