It's not what we think.
Andrew Ng (left), Chief Scientist at Baidu, and Eric Horvitz (right), Managing Director at Microsoft Research. Photos: Brad Wenner and Michael Clinard

If the internet was the birth of the digital revolution, then today’s artificial intelligence is its first baby steps toward maturity.1

Today, A.I. researchers feed an algorithm data and painstakingly help it learn.

But to make A.I. that’s knowledgeable on a grand scale—like learning the idiosyncrasies needed to translate every human language—the software needs to learn on its own. However, researchers don’t agree on how to make that happen. One camp thinks that if we correct algorithms when they make the wrong decisions, they’ll learn to avoid bad choices and choose only the right ones.2 In other words, we parent our A.I. until it can thrive on its own.

1: Eric Horvitz: “Somehow children have an amazing ability to soak in the world and learn tons of things about the world without needing someone to provide the output. The technical term for this is unsupervised learning. It’s basically just learning from A without needing B for every single input. We think a lot of humans learn just from A and not much from B. Children learn speech just by listening to speech.”
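
A minimal sketch of the distinction Horvitz describes, under illustrative assumptions: an unsupervised learner gets only the inputs A (here, unlabeled 2-D points) and finds structure on its own, whereas a supervised learner would also need a label B for every input. The data, the k-means routine, and all names below are invented for illustration; this is not either researcher's system.

```python
import numpy as np

rng = np.random.default_rng(0)

# "A": inputs only -- two blobs of unlabeled 2-D points. No labels "B" are given.
A = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2)),
])

def kmeans(points, k=2, iters=20):
    """Unsupervised learning: discover k groups in the inputs without any labels."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assignments = distances.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        centers = np.array([points[assignments == j].mean(axis=0) for j in range(k)])
    return centers, assignments

centers, assignments = kmeans(A)
print("discovered cluster centers:\n", centers)
# A supervised learner, by contrast, would be trained on (A, B) pairs --
# for speech recognition, audio clips paired with their transcriptions.
```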

2: Andrew Ng: “Today, we create our speech recognition systems on 45,000 hours of audio data—about five years of continuous talking. I’m in awe that we can actually build supercomputers that can process five years’ worth of audio in a couple of weeks. But I’m also slightly embarrassed that our algorithms need so much data. No human brain needs five years of continually transcribed audio to learn English.”

The other camp believes learning is also informed by self-awareness, which lets humans make decisions with an understanding of their own limits. They say artificial intelligence would likewise benefit from reflecting on its decisions.3 Algorithms could avoid bad decisions by understanding their own limited abilities, as some researchers have shown.4

But there are no hard feelings over these research differences—the field is idealistic and collaborative,5 and competitors often share progress through open-source code. It’s important that they do, because the entire industry will need to answer larger questions about the impact of its self-aware software—like where humans will still fit into a world run by A.I.6

3: Horvitz: “No matter how poor the pieces are, at least if you have a really good layer of reflection, the system would know its limitations. It would know how good it is, and it would be bound by rationality. It would be able to understand how it’s meant to employ itself in different situations, so it would be helpful even if it weren’t perfect.”
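
A hedged sketch of the "layer of reflection" Horvitz describes: a wrapper that asks the underlying model how confident it is and declines to act when that confidence is too low. The wrapper, the toy model, the threshold, and all names are assumptions for illustration, not Horvitz's actual system.

```python
from dataclasses import dataclass

@dataclass
class ReflectiveWrapper:
    """Wraps any model that reports (prediction, confidence) and refuses
    to act on predictions whose confidence falls below a threshold."""
    model: object
    min_confidence: float = 0.8

    def decide(self, x):
        prediction, confidence = self.model.predict_with_confidence(x)
        if confidence < self.min_confidence:
            # The system "knows its limitations": defer rather than guess.
            return {"action": "defer_to_human", "confidence": confidence}
        return {"action": prediction, "confidence": confidence}

class ToyModel:
    """Stand-in model that is confident only on inputs near what it has seen before."""
    def predict_with_confidence(self, x):
        if 0.0 <= x <= 1.0:
            return "inside_known_range", 0.95
        return "outside_known_range", 0.40

wrapper = ReflectiveWrapper(model=ToyModel())
print(wrapper.decide(0.5))   # high confidence: the wrapper acts
print(wrapper.decide(42.0))  # low confidence: the wrapper defers
```

Even if the underlying model is weak, the reflective layer keeps the system useful by bounding when it is willing to act on its own.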

4: Horvitz: “This evolving A.I. assistant I’ve built weaves together vision, natural dialogue ability, and generation of facial expressions that captures uncertainty at various levels. Plus a set of services that can predict—based on 10 years of data—where is Eric going to be in 10 minutes? How long will he be in his office until he leaves? Which meetings will he not attend, even though they’re on his calendar?”
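
One way to read the prediction services Horvitz mentions, sketched under invented assumptions: with enough calendar history, even a simple count-based model can estimate how likely a given kind of meeting is to be skipped. The data, field names, and smoothing choice below are illustrative, not drawn from his assistant.

```python
from collections import defaultdict

# Hypothetical calendar history: (meeting_type, attended) pairs.
history = [
    ("weekly_staff", True), ("weekly_staff", True), ("weekly_staff", False),
    ("external_talk", True), ("external_talk", True),
    ("optional_review", False), ("optional_review", False), ("optional_review", True),
]

# Estimate P(attend | meeting_type) from counts, with add-one smoothing.
counts = defaultdict(lambda: [1, 2])  # [attended + 1, total + 2]
for meeting_type, attended in history:
    counts[meeting_type][0] += int(attended)
    counts[meeting_type][1] += 1

def probability_of_attending(meeting_type):
    attended, total = counts[meeting_type]
    return attended / total

for m in ("weekly_staff", "optional_review"):
    print(m, round(probability_of_attending(m), 2))
```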

5: Ng: “There is that attitude in the A.I. community today that we’re all in this together, in that we’re trying to build a better society using A.I. This has led to an open sharing of ideas and even software. We do what we do because fundamentally we think it will make the world a better place, so we really want to share our discoveries with other people rather than keep things secret.”

6: Horvitz: “What are the implications for people who might be out of the kind of jobs they’re trained for? Can we plan for that? Can we solve it? We might have to come up with ways to redistribute wealth because we know these technologies will generate more wealth. We really need to start being proactive about this and think these things through.”

This article was originally published in the September/October 2016 issue of Popular Science under the title “How Will Robots Learn.”