OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.
OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.
Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.
to advance digital intelligence in the way that is most likely to benefit humanity as a whole
IBM is not trying to solve the problem I care about, which is getting access to knowledge that is easily comprehensible about problems everyday people actually have. A lot of that knowledge isn’t in any computer in the first place or is in academic journals, so all the key word search in the world really will not help the average person much.
Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry.
Elon – we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?”
I think the best defense against the misuse of AI is to empower as many people as possible to have AI. – Elon
ELON MUSK + SAM ALTMAN LAUNCH OPENAI NONPROFIT THAT WILL USE AI TO “BENEFIT HUMANITY”
Silicon Valley is in the midst of an artificial intelligence war, as giants like Facebook and Google attempt to outdo each other by deploying machine learning and AI to automate services. But a brand-new organization called OpenAI—helmed by Elon Musk and a posse of prominent techies—aims to use AI to “benefit humanity,” without worrying about profit.
Altman said he imagines that OpenAI will work with both of those companies, as well as any others interested in AI. “One of the nice things about our structure is that because there is no fiduciary duty,” he said, “we can collaborate with anyone.”
follow open ai:
a means to model how 7 billion people could leapfrog to a nother way to live.
Worth reposting the Wait But Why piece on AI. We are at the beginning of exponential growth in digital intelligence.
https://t.co/1c30ZwrxQ1
Original Tweet: https://twitter.com/elonmusk/status/702534707464896512
First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply.
There are three major AI caliber categories:
AI Caliber 1) Artificial Narrow Intelligence (ANI): …specializes in one area.
AI Caliber 2) Artificial General Intelligence (AGI): …refers to a computer that is as smart as a human across the board—
AI Caliber 3) Artificial Superintelligence (ASI): …“an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
The Road From AGI to ASI
The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:
- Size and storage.
- Reliability and durability.
- Editability, upgradability, and a wider breadth of possibility.
- Collective capability.
And here’s where we get to an intense concept: recursive self-improvement. It works like this…
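The recursive self-improvement idea can be caricatured in a few lines of Python. This is a toy model, not an actual AI: the `improve` function and its 1.5× growth rate are invented for illustration; the only point is that when a system uses its current capability to build its successor, the gains compound instead of adding.

```python
# Toy model of recursive self-improvement: each generation uses its
# current capability to design a slightly better successor, so the
# gains compound rather than add.
def improve(capability):
    # Hypothetical rule: a smarter system makes proportionally
    # larger improvements to itself -- hence exponential growth.
    return capability * 1.5

capability = 1.0           # call the starting point "human-level" = 1.0
history = [capability]
for generation in range(10):
    capability = improve(capability)
    history.append(capability)

# After only 10 self-improvement cycles the system is ~57x its
# starting level: linear effort, exponential result.
print(round(history[-1], 1))  # 57.7
```

Swap the multiplier for an additive `+ 0.5` and ten cycles yield 6×, not 57× — the compounding, not the step size, is what makes the curve "intense."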
april 2016 – inside ai – elon’s plan to set ai free:
even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share.
Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”
OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”
OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”
Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work.
Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called “unsupervised learning”—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.
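The contrast with labeled data is easiest to see in miniature. In the sketch below — a toy two-armed bandit, with payoffs invented for illustration, not anything from OpenAI — nobody ever labels the "right" answer; the agent just repeats the task and tracks which action produces the best results, exactly the reinforcement-learning loop Brockman describes.

```python
import random

random.seed(0)

# Reinforcement learning in miniature: repeat a task many times and
# keep track of which action earns the best average reward. No labeled
# examples -- only a reward signal. (Two actions with made-up payoffs;
# action 1 is secretly better.)
PAYOFF = {0: 0.3, 1: 0.8}   # expected reward for each action

totals = {0: 0.0, 1: 0.0}
counts = {0: 0, 1: 0}

for trial in range(500):
    # Explore 10% of the time (and until every action has been tried),
    # otherwise exploit the action with the best average so far.
    if random.random() < 0.1 or 0 in counts.values():
        action = random.choice([0, 1])
    else:
        action = max(totals, key=lambda a: totals[a] / counts[a])
    totals[action] += PAYOFF[action]
    counts[action] += 1

best = max(totals, key=lambda a: totals[a] / counts[a])
print(best)  # 1 -- discovered by trial and error, not from labels
```

A supervised version of the same problem would instead be handed pairs like `(situation, correct_action)` up front; the whole point of the reinforcement setup is that no such pairs exist.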
At OpenAI, Brockman wants to make everyone privy to its research.
but not part of it..?
This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers… “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”
Giving up control is the essence of the open source ideal.
we can do better – we need all 7 bill playing.. not just ones we guess know all.. what if all those smart ones are missing the very key piece to.. let us let go.
Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.
but they’re not.. they’re missing mega capacity w/limited crowd
on ai/algo ness (more on Greg‘s page)
@Digitaltonto: Bitcoin May Not Survive, But The Technology Behind It Will Live On – digitaltonto.com/2016/bitcoin-m…
One way to solve the Byzantine Generals Problem is by establishing a trusted third party, which is the role that governments and other institutions traditionally play in financial transactions. Yet a third party is not a true solution, because there’s always the possibility that the third party can be corrupted as well. In effect, it merely assumes away the problem.
Blockchains solve the trust protocol by creating a distributed ledger, almost as if the generals could constantly refer to an encrypted Reddit post that they all had access to. What makes the technology so exciting is that there is a wide variety of areas beyond financial transactions where trust is important.
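Mechanically, the "encrypted post they all had access to" is simple: each ledger entry commits to a hash of the entry before it, so no past record can be rewritten without invalidating everything after it. A minimal sketch (toy code — there is no distribution, mining, or consensus here, just the hash-chaining that makes tampering visible):

```python
import hashlib

def make_block(data, prev_hash):
    """A block commits to its own data and to its predecessor's hash."""
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev_hash": prev_hash, "hash": digest}

def valid(chain):
    """Recompute every link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "genesis"
        expected = hashlib.sha256((prev + block["data"]).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
    return True

chain = [make_block("alice pays bob 5", "genesis")]
chain.append(make_block("bob pays carol 3", chain[-1]["hash"]))
print(valid(chain))                        # True
chain[0]["data"] = "alice pays bob 500"    # try to rewrite history...
print(valid(chain))                        # False: later links no longer match
```

Replicate that chain across many parties and a would-be cheater has to rewrite not one ledger but all of them at once — which is why the generals no longer need a trusted third party.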
so first para… we go ginormous small and trust each individual… in the moment…
but huge key.. that many miss/fear… is that we are trusting humanity.. ie: are you human..?.. ok.. i trust you… not ie: here’s the measurement/validation of that transaction… trust that
perhaps our hearts were made to trust people.. but not so much to trust measurement… that’s man-made… and very subjective.
also.. transaction cant tell us their story.. even on ledgered blockchain… because its algo…
but.. people can tell us their story.. (if we care enough to listen… otherwise… no grounds to judge.. so save energy and just carry on.. trusting that people are good)
i know you ness.. is about people… as is.. enough… not about measuring transactions
Traditionally, one major way that we have legislated trust in our society is through contracts, which codify obligations, penalties and the jurisdiction whose laws will enforce the agreement. These can be incredibly cumbersome documents, often running to hundreds of pages.
Consider the case of a building project, in which a general contractor must sign agreements with hundreds of subcontractors. To enforce these contracts, work must be inspected and if it passes muster, it goes to an accounts payable department, which authorizes payment and instructs a bank to wire the proper amount. The process usually takes at least a few weeks.
However, a smart contract powered by a blockchain can streamline the process through automation. Using a simple tablet computer, an inspector can instantly activate the smart contract, which has all of the provisions of agreement embedded within it, to arrange settlement, including payment.
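What "provisions embedded within it" means is that the agreement itself is executable: the inspector's sign-off is the trigger and payment is the effect, with no accounts-payable step in between. A toy illustration (all names and amounts invented; a real smart contract would run on a blockchain, not in a local class):

```python
# Toy "smart contract": the agreement's provisions live in code, and a
# single inspection event settles payment automatically.
class SmartContract:
    def __init__(self, subcontractor, amount_due):
        self.subcontractor = subcontractor
        self.amount_due = amount_due
        self.paid = False

    def record_inspection(self, passed):
        """The inspector's sign-off is the trigger; payment is the effect."""
        if passed and not self.paid:
            self.paid = True
            return f"wired {self.amount_due} to {self.subcontractor}"
        return "no payment released"

contract = SmartContract("Acme Drywall", 12500)
print(contract.record_inspection(passed=False))  # work failed: nothing moves
print(contract.record_inspection(passed=True))   # passes muster: instant settlement
```

The weeks-long inspect → approve → authorize → wire pipeline collapses into one method call, which is the entire streamlining claim of the paragraph above.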
*share – @jhagel: Just the beginning – global law firm announced it has hired a robot lawyer to work in the firm’s bankruptcy practice. for.tn/1Wwrywt
and this ..
*share – Racist AI putting people in prison. Superb reporting by @JuliaAngwin @ProPublica https://t.co/qSj694uFKM
Original Tweet: https://twitter.com/trevorpaglen/status/735150486865608704
the tech we have means.. time to let go… and leap… for blank’s sake
Artificial Intelligence Is Far From Matching Humans, Panel Says https://t.co/FLNobiNFW9
Original Tweet: https://twitter.com/jkleske/status/735803228462407681
Your Silicon Valley heroes, Ladies & Gentlemen https://t.co/9AuRaSNAdv
Original Tweet: https://twitter.com/jkleske/status/735687749433335809
The Silicon Valley Power Players Who Might Hate Gawker Enough to Try to Kill It fusion.net/story/306626/s…
Co.Design (@FastCoDesign) tweeted at 4:38 AM – 12 Dec 2016 :
Teaching AI to play video games could make it much smarter https://t.co/vzC2rxQOpZ https://t.co/nUAjHG3E14
(http://twitter.com/FastCoDesign/status/808274715349553152?s=17)
“It’s not just games,” says OpenAI’s Catherine Olsson, an alum of MIT’s Brain & Cognitive Science group. “Our goal is that anything a human can do *on a computer, an AI agent should also be able to do.”
*on a computer
In a way, Universe removes the subjective guesswork, because if someone develops an AI that can play video games as well as it can book a flight, “it must have general intelligence.” And if it has general intelligence, an AI is on the path to being *as good as a person.
*as good as a person
i thought it was just anything on a computer..
That opens the door to all sorts of possibilities. Digital assistants who are as good as people at scheduling your tasks,
so not.. as good as a person.. as good as a person..scheduling things on a computer.. huge diff
Universe won’t teach AIs to do these things by itself. Computer researchers will still need to figure out how to program them to be *as good at learning as humans. But it will provide the obstacle course. And if everything goes well? It could help establish the first **truly “human” AIs.
seems.. whatever we can teach ai to do… isn’t *as good at learning as humans…as in being curious plus.. the pleasure of finding things out
so then.. not **truly human
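Whatever one makes of those claims, Universe's mechanical premise is a single interface: every task — game, booking form, anything on a screen — is reduced to the same observe / act / reward loop. A toy version of that loop (this environment and agent are invented stand-ins for illustration, not Universe's actual API):

```python
import random

random.seed(1)

class GuessEnv:
    """A stand-in task behind a Universe-style interface: the agent only
    ever sees observe / act / reward, regardless of what the task is.
    Here the task is guessing a hidden digit; reward 1.0 on a hit."""

    def reset(self):
        self.target = random.randint(0, 9)   # the task's hidden goal
        return "initial observation"

    def step(self, action):
        reward = 1.0 if action == self.target else 0.0
        done = reward == 1.0
        return "screen pixels would go here", reward, done

env = GuessEnv()
env.reset()
steps, done = 0, False
while not done:   # the dumbest possible agent: random actions until rewarded
    _, reward, done = env.step(random.randint(0, 9))
    steps += 1
print("solved in", steps, "steps")
```

Because the interface never changes, the same agent code could in principle be pointed at any wrapped task — which is exactly the generality bet the passage describes, and exactly where "as good as a person *on a computer*" and "as good as a person" come apart.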
10 bn mbytes of new info every sec..shrinking attention/cog ability to keep up add to problem
The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.
Original Tweet: https://twitter.com/sama/status/1284922296348454913
spinning wheels (not changing world) until we let go of intelligence ness
‘seen advances in every aspect of lives except our humanity‘– Mufleh
let’s focus on interconnectedness
‘when understand interconnectedness..makes you more afraid of hating than of dying’– @
@danielbigham One thing I’ve updated towards thinking is that the Turing test is less interesting than it seems. A big moment for me will be when AI can prove a new mathematical theorem.
Original Tweet: https://twitter.com/sama/status/1284928645073481728
Informative overview by @strwbilly about @OpenAI’s just-released freestyle text generator, GPT-3.
Examples of where it impresses (it can “build” a web page!) and where it doesn’t (there’s no way it could have written, say, an article like this without copying and pasting it). https://t.co/TkZFoVEDhV
Original Tweet: https://twitter.com/zittrain/status/1285558293944061953
from cto (of open ai) greg brockman:
DALL-E — our new neural network for generating images from text:
Original Tweet: https://twitter.com/gdb/status/1346554999241809920