New ‘OpenAI’ Artificial Intelligence Group Formed By Elon Musk, Peter Thiel, And More

A Silicon Valley supergroup has united to keep AI open -- so they say
Photo: Elon Musk speaks ahead of the COP 21 Paris climate talks. (Associated Press/Francois Mori)


In the last few years, the world of artificial intelligence has largely been dominated by big internet companies with huge computing infrastructures, like Google and Facebook, and by research universities like MIT and Stanford.

Now there’s another player in town: OpenAI. The non-profit research firm is backed by heavy hitters including co-chairs Elon Musk (of SpaceX and Tesla fame) and Y Combinator’s Sam Altman, as well as investor Peter Thiel (who worked with Musk at PayPal). The group says it has secured a billion dollars in private funding from backers including Thiel and Amazon Web Services.

“We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” OpenAI writes in its first blog post, published just a few moments ago.

The goal? Make the scope of A.I. less narrow. Right now, a machine can be good at identifying people or good at answering questions, but not both. The ultimate goal in A.I. research is to “generalize” intelligence: to build an algorithm that can do it all.

In pursuit of that, OpenAI’s founding team hired Ilya Sutskever as research director. Sutskever is currently a research scientist at Google and has worked with some of the best-known names in A.I. and machine learning, like Geoff Hinton and Andrew Ng (who work with Google and Baidu, respectively).

The organization is a non-profit and hopes to spend only a small fraction of its billion-dollar seed funding in the next few years. It also hopes to “freely collaborate” with other institutions, which makes sense, as nearly everyone on its research team comes from a prestigious institution like Google, Stanford, or New York University.

Musk’s involvement in particular is noteworthy, given that the SpaceX founder has previously warned that artificial intelligence could be more dangerous than nuclear weapons. OpenAI appears to be, in part, an effort to keep the development of A.I. in check going forward.

Developing…