
Facebook wants it to be clear: it’s pushing the forefront of artificial intelligence research. But with that statement comes a reassurance: these machines are not evil, and they’re going to make life better.

In a blog post today, Facebook Artificial Intelligence Research (FAIR), the division within the company dedicated to AI, detailed some of its most important achievements of 2015. Some were older and had been previously reported on, like its neural network that can recount plot points in The Lord of the Rings, and some are new, like a working unsupervised learning model.

The company has made substantial investments in artificial intelligence over the last few years, building a small department of 45 researchers and engineers based in New York City. Right now, Facebook uses artificial intelligence to automatically tag photos, translate text, and power Facebook M, its personal assistant that is slowly rolling out to more users.

Most recently, Facebook showed off a new feature for blind users that will “look” at pictures and describe their contents. Facebook founder and CEO Mark Zuckerberg posted a video highlighting the success of the feature on his wall today. The feature integrates a short-term memory into image recognition: the machine doesn’t just see a picture of a baby in a bathtub, but can answer questions about the photo. The video shows this being done with dog breeds, which means the AI understands not only the category of species, but also the specific variations within a species that make each breed different.
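Facebook hasn’t released code for this feature, but the behavior it describes boils down to mapping an image and a question to an answer. Here is a purely illustrative Python sketch of that interface; every name and function below is hypothetical, with crude averaging and word-hashing standing in for the real neural networks:

```python
import numpy as np

# Hypothetical sketch of the (image, question) -> answer interface.
# Nothing here is Facebook's system; the encoders are crude stand-ins
# for a convolutional network and a memory network.

ANSWERS = ["yes", "no", "a baby", "a bathtub", "a labrador"]

def encode_image(image):
    """Stand-in for a convnet: reduce the photo to a small feature vector."""
    return image.mean(axis=(0, 1))                 # per-channel averages

def encode_question(question, dim=3):
    """Stand-in for a text encoder: hash words into a fixed-size vector."""
    vec = np.zeros(dim)
    for word in question.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def answer(image, question, weights):
    """Score every candidate answer against the joint image/question code."""
    joint = np.concatenate([encode_image(image), encode_question(question)])
    return ANSWERS[int(np.argmax(weights @ joint))]

# A random "photo" and untrained weights, just to show the call shape.
photo = np.random.rand(64, 64, 3)
weights = np.random.randn(len(ANSWERS), 6)
print(answer(photo, "what is in the bathtub?", weights))
```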

“We see AI as helping computers better understand the world — so they can be more helpful to people. We’re still early with this technology, and you can already start to imagine how helpful it will be in the future,” Zuckerberg writes.

Zuck is right: we’re very early in the development of artificially intelligent systems. Every announcement today, and most announcements in the past, focuses on getting machines to do something as well as a human (and even that’s a stretch).

This is a status update for where we are right now in AI development. We’re in the field’s infancy: our artificial neural networks are literally playing with blocks and making sense of digital imagery. Artificial “intelligence” is actually very dumb; it only seems so impressive because the scale of its computation far exceeds our own.

Yann LeCun, Facebook’s head of AI research, likens artificial intelligence to the car. The car can move far faster than a human, which makes it great for transportation. But a car can’t paint by itself, or tell a joke, or lift a box. We’ve engineered incredible systems for specific tasks, but we’re still miles away from anything as generalized as a human.

“Artificially intelligent systems are going to be an extension of our brains, the same way cars are an extension of our legs,” LeCun says in the video Zuckerberg posted. “They’re not going to replace us, they’re going to amplify everything we do: augmenting your memory, giving you instant knowledge, and they’re going to allow us to concentrate on doing things that are properly human.”

That’s not to say that Facebook’s announcements today aren’t impressive. Unsupervised learning has long been an issue for AI researchers. Humans learn by observing their surroundings, and by all rights, machines should be able to do the same thing. But researchers have hit snags where the machine simply doesn’t have enough internal feedback to make sense of all that information. One attempt at a solution has been competitive learning, where the algorithm fights itself to arrive at the best answer.
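The post doesn’t spell out the method, but “an algorithm that fights itself” matches what researchers call adversarial training. Here is a rough one-dimensional illustration of that idea (not Facebook’s actual system): a “generator” learns to produce numbers that a “discriminator” can no longer tell apart from real data:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

real_mean = 4.0        # the data the generator must learn to imitate
g = 0.0                # generator parameter: the mean of its samples
w, b = 1.0, 0.0        # discriminator: D(x) = sigmoid(w * x + b)
lr = 0.05

for step in range(2000):
    real = rng.normal(real_mean, 1.0, 64)
    fake = g + rng.normal(0.0, 1.0, 64)

    # Discriminator's turn: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((real * (1 - d_real)).mean() - (fake * d_fake).mean())
    b += lr * ((1 - d_real).mean() - d_fake.mean())

    # Generator's turn: shift its samples toward whatever fools D.
    d_fake = sigmoid(w * fake + b)
    g += lr * (w * (1 - d_fake)).mean()

print(f"generator mean: {g:.2f}  (real data mean: {real_mean})")
```

The two models take turns exploiting each other’s mistakes, and the fight ends when the fakes are indistinguishable from the real thing.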

But the video released today shows a new approach: showing the AI a controlled environment with two outcomes. The neural network watches a stack of blocks and judges whether it will fall. As it watches, it learns the idea of falling and gets better as time goes on. Facebook calls this predictive learning, and reports that the neural net can now judge with up to 90 percent accuracy whether the blocks will fall, a result the company claims is better than most humans manage.
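Stripped of the rendered video, the recipe is ordinary supervised prediction: generate situations, label each with its outcome, and let a model find the pattern. A toy version with two-block stacks might look like this (Facebook’s system learns from raw imagery; here a hand-picked feature and a tiny logistic model stand in):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the block task. Each "stack" is two unit-width blocks;
# physics says it topples when the top block is offset by more than half a
# width. The model never sees that rule; it only sees labeled examples and
# a hand-picked feature (offset squared), where a real network would have
# to learn its features from pixels.

def make_stacks(n):
    offsets = rng.uniform(-1.0, 1.0, n)       # top block's horizontal offset
    return (offsets ** 2).reshape(-1, 1), (np.abs(offsets) > 0.5).astype(float)

X, y = make_stacks(2000)                      # watch many stacks fall (or not)
w, b = np.zeros(1), 0.0                       # a one-weight logistic model

for _ in range(500):                          # gradient-descent training
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

X_new, y_new = make_stacks(500)               # stacks it has never seen
pred = 1.0 / (1.0 + np.exp(-(X_new @ w + b))) > 0.5
print(f"accuracy on new stacks: {(pred == (y_new > 0)).mean():.0%}")
```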

For AI researchers, there’s big news in Facebook’s presentation at the upcoming Neural Information Processing Systems (NIPS) conference. The company claims its new system uses one tenth of the training data and works 30 percent faster. What does that mean for the end user? In incredibly basic terms, an artificial neural network works in two steps: it looks at a bunch of information, and then it can synthesize or sort new information.
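Those two steps, training and then inference, are easy to see with any off-the-shelf classifier. This sketch uses scikit-learn and made-up “face feature” vectors; a real system would extract such features from raw photos with a deep network:

```python
from sklearn.neighbors import KNeighborsClassifier

# Step 1: look at a bunch of labeled information (the training data).
# The feature vectors and names here are invented for illustration.
train_features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_labels = ["alice", "alice", "bob", "bob"]
model = KNeighborsClassifier(n_neighbors=1).fit(train_features, train_labels)

# Step 2: sort new information the model has never seen before.
print(model.predict([[0.85, 0.15]]))   # -> ['alice']
```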

Using a model called DeepMask, Facebook can extract much more information from photos than before. (Image: Facebook)

The initial information is called training data. To tag your photos, Facebook looks at 60 photos of you, and then understands what you look like well enough to tag you in the future. Where the old system needed those 60 photos, the new artificial neural network would need only six, delivering the same results (97 percent accuracy) with far less data.

The system behind this is called “DeepMask,” which separates objects from their surroundings and then identifies features within each object to classify it. It pulls information out of imagery with a higher degree of specificity, meaning more bang for your buck per photo.
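Facebook hadn’t released DeepMask itself at press time, so the following is only a cartoon of the segment-then-classify idea: simple thresholding plays the role of the first stage, and a one-rule “classifier” plays the role of the second. The labels and thresholds are invented:

```python
import numpy as np

# Cartoon of a two-stage pipeline: stage one separates an object from its
# surroundings with a mask, stage two classifies using only the pixels
# inside that mask. Real systems use convolutional networks for both.

def segment(image, background=0.1):
    """Stage 1: mark every pixel that stands out from the background."""
    return np.abs(image - background) > 0.2

def classify(image, mask):
    """Stage 2: a one-rule 'classifier' that looks only at masked pixels."""
    return "cat" if image[mask].mean() > 0.6 else "dog"

# A synthetic 8x8 "photo": dim background with one bright square object.
photo = np.full((8, 8), 0.1)
photo[2:6, 2:6] = 0.8

mask = segment(photo)
print(mask.sum(), "object pixels ->", classify(photo, mask))
```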

All this is just one beat of the war drum for artificial intelligence research, and Facebook rightfully sees this as a long haul.

“It will take a lot of years of hard work to see all this through,” writes Facebook CTO Mike Schroepfer. “But if we can get these new technologies right, we will be that much closer to connecting the world.”
