Computers Are Closer To Copying The Way Humans Learn

Training artificial intelligence to write letters

The gold standard of artificial intelligence is a computer that can learn the same way we humans do. For example, if you see just one toothbrush and learn its use, it’s pretty easy to identify other toothbrushes: if an object is long and thin, with little bristles and a handle, you can be fairly sure it’s a toothbrush. And since you know it has to fit in a mouth, you can imagine what would make a good tool for the job and what would not, further limiting what a toothbrush can be.

Getting machines to learn this way has been a struggle, because complex objects, like toothbrushes, have to be described in mathematical formulas before a computer can understand them. A lot of work in machine learning, the field through which we tackle artificial intelligence, centers on how best to represent objects and ideas so computers can understand them.

New research published in Science claims to come closer to the human method of learning. The researchers’ idea: build a tiny computer program for each “learned” concept. Each little program explains a small concept the system has already seen and generates different ways to arrive at the same end product.

The best way to explain this is through an example. Right now this only works for very simple symbols, like handwritten letters of the alphabet.

Researchers showed their algorithm examples of handwritten characters from several ancient alphabets, along with how each was written, and the algorithm memorized those processes in the form of a small computer program that explains how each character is constructed. The researchers call this Bayesian Program Learning: having seen how a character is built, the algorithm understands the different parts of each letter, and it can later use those parts in new ways to classify or create characters, much as humans do.
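To make the compositional idea concrete, here is a toy sketch in Python. It is not the researchers’ actual model, which relies on probabilistic inference over stroke primitives; the stroke library, function names, and jitter value below are invented purely for illustration. The point is only that a character can be stored as a little program built from reusable parts, and that the same parts can be recombined to produce variants or entirely new symbols.

import random

# Toy stroke primitives: each is a list of (x, y) points on a small grid.
# These specific strokes and their names are invented for illustration only.
STROKE_LIBRARY = {
    "vertical_bar":   [(0, 0), (0, 1), (0, 2)],
    "horizontal_bar": [(0, 0), (1, 0), (2, 0)],
    "hook":           [(0, 2), (0, 1), (1, 0)],
}

def character_program(parts, jitter=0.1):
    """A tiny 'program' for a character: a fixed sequence of stroke parts.

    Calling the returned function draws the character with small random
    perturbations, so each call produces a slightly different exemplar,
    loosely mimicking the idea that one generative program can explain
    many handwritten variants of the same letter.
    """
    def draw():
        points = []
        for offset, part in enumerate(parts):
            for (x, y) in STROKE_LIBRARY[part]:
                points.append((x + offset + random.uniform(-jitter, jitter),
                               y + random.uniform(-jitter, jitter)))
        return points
    return draw

# "Learn" a letter as a composition of two known parts...
letter_t = character_program(["vertical_bar", "horizontal_bar"])

# ...then reuse the same parts in a new combination to invent a new symbol.
new_symbol = character_program(["hook", "vertical_bar"])

print(letter_t())      # one noisy exemplar of the learned letter
print(new_symbol())    # a novel character built from the same parts

Running letter_t() twice prints two slightly different exemplars of the same “letter,” which is roughly the property the real system exploits when it generalizes from a single example.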

Other computers can already do this with deep learning, a discipline within artificial intelligence that uses networks of mathematical equations to find ideas within data. However, whereas deep learning techniques can require the machine to analyze dozens to millions of examples, the new method claims to work from a single example of an idea.

This means that one day we could have true facial recognition at any angle from just one good image of a person.

The results claimed for this method are impressive. To test how well the algorithm had learned, the researchers pitted it against humans: both people and the machine were given a new character and had to reproduce it.

Then they asked people (recruited through Amazon Mechanical Turk) to decide which characters were made by humans and which were made by the machine. They couldn’t reliably tell: the error rate was 48 percent, barely better than the 50 percent expected from random guessing.
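As a quick sanity check on that comparison, the sketch below simulates judges who guess completely at random between “human” and “machine”; their error rate hovers around 50 percent, which is what makes the reported 48 percent notable. The trial count is arbitrary and the simulation is only illustrative, not part of the study.

import random

# Simulate judges in a "visual Turing test" who guess at random.
# If they truly can't tell human from machine, errors approach 50 percent.
# The trial count is arbitrary; this is only a back-of-the-envelope check.
random.seed(0)
trials = 100_000
truths = random.choices(["human", "machine"], k=trials)
errors = sum(random.choice(["human", "machine"]) != truth for truth in truths)
print(f"Random-guess error rate: {errors / trials:.1%}")  # roughly 50%
print("Reported judge error rate: 48%, close to chance")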

Can you guess which letters were made by the A.I.?

More than anything, this means we’re still just at the beginning of learning what we can do with machine learning and artificial intelligence. And while this research is important, it doesn’t necessarily mean this is how all machines will learn in the future. Just as this approach could replace today’s ways of thinking about how computers understand concepts, it’s entirely possible someone will find a better way next month.

Each step, each paper, and each idea lights another candle to illuminate the massive void in our knowledge of intelligence and consciousness. Today we can better create handwritten characters; maybe tomorrow machines will generate human-like speech, or even recreate art, with greater success.