Engineers finally peeked inside a deep neural network

Nineteenth-century math can give scientists a tour of 21st-century AI.
[Image: an illustration of a circuit in the form of a human brain. Neural networks may be viewed as black boxes, even by their creators. Deposit Photos]

Say you have a cutting-edge gadget that can crack any safe in the world—but you haven’t got a clue how it works. What do you do? You could take a much older safe-cracking tool—a trusty crowbar, perhaps. You could use that lever to pry open your gadget, peek at its innards, and try to reverse-engineer it. As it happens, that’s what scientists have just done with mathematics.

Researchers have examined a deep neural network, a form of artificial intelligence that's notoriously enigmatic on the inside, with a well-worn type of mathematical analysis that physicists and engineers have used for decades. The researchers published their results in the journal PNAS Nexus on January 23. Their findings hint that their AI is doing many of the same calculations that humans have long done themselves.

The paper’s authors typically use deep neural networks to predict extreme weather events and to tackle other climate applications. While better local forecasts can help people schedule their park dates, predicting the wind and the clouds can also help renewable energy operators plan what to put into the grid in the coming hours.

“We have been working in this area for a while, and we have found that neural networks are really powerful in dealing with these kinds of systems,” says Pedram Hassanzadeh, a mechanical engineer from Rice University in Texas, and one of the study authors.

Today, meteorologists often do this sort of forecasting with models that require behemoth supercomputers. Deep neural networks need much less processing power to do the same tasks. It’s easy to imagine a future where anyone can run those models on a laptop in the field.

[Related: Disney built a neural network to automatically change an actor’s age]

AI comes in many forms; deep neural networks are just one of them, if a very important one. A neural network has three parts. Say you build a neural network that identifies an animal from its image. The first part might translate the picture into data; the middle part might analyze the data; and the final part might compare the data to a list of animals and output the best matches.

What makes a deep neural network “deep” is that its creators expand that middle part into a far more convoluted affair, consisting of multiple layers. For instance, each layer of an image-analyzing deep network might examine successively more complex sections of the image.
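To make that structure concrete, here's a toy sketch of the three parts in Python. Everything in it is invented for illustration: the layer sizes, the random untrained weights, and the three-animal list. A real image-recognition network would be trained on data and be vastly larger.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random placeholder weights; training would tune these values.
    return rng.normal(size=(n_in, n_out)) * 0.1

# First part: translate the picture into data (here, 64 fake pixel values).
image = rng.random(64)

# Middle part: the "deep" stack of layers that analyzes the data.
hidden_layers = [layer(64, 32), layer(32, 32), layer(32, 32)]

# Final part: compare against a list of animals and score each one.
output_layer = layer(32, 3)
animals = ["cat", "dog", "parrot"]

x = image
for w in hidden_layers:
    x = np.maximum(0.0, x @ w)  # ReLU, a common nonlinearity
scores = x @ output_layer

print("best match:", animals[int(np.argmax(scores))])
```

With untrained weights the answer is meaningless; the point is only the shape of the pipeline: data in, layer after layer of analysis, best match out.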

That complexity makes deep neural networks very powerful, and they’ve fueled many of AI’s most impressive feats in recent memory. One of their first abilities, more than a decade ago, was transcribing human speech into words. Since then, they’ve colorized images, tracked financial fraud, and designed drug molecules. And, as Hassanzadeh’s group has demonstrated, they can predict the weather and forecast the climate.

[Related: We asked a neural network to bake us a cake. The results were…interesting.]

The problem, for many scientists, is that nobody can actually see what the network is doing, because of the way these networks are made. Researchers train a network by assigning it a task and feeding it data. As the newborn network digests more data, it adjusts itself to perform that task better. The end result is a “black box,” a tool whose innards are so scrambled that even its own creators can’t fully understand them.
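As a minimal sketch of what “adjusts itself” means, consider the simplest possible training loop in Python. The toy task, the learning rate, and the single-weight “network” are all invented for illustration; a real deep network tunes millions of weights in the same basic way.

```python
import numpy as np

# Toy task: learn to double a number. The whole "network" is one weight.
rng = np.random.default_rng(1)
inputs = rng.random(100)
targets = 2.0 * inputs

w = 0.0  # the newborn network starts out knowing nothing
for step in range(200):
    predictions = w * inputs
    errors = predictions - targets
    # The network "adjusts itself": nudge the weight in whichever
    # direction shrinks the average squared error.
    gradient = 2.0 * np.mean(errors * inputs)
    w -= 0.1 * gradient

print(f"learned weight: {w:.3f}")  # converges toward 2.0
```

No human ever sets the weight directly, and with millions of weights instead of one, nobody can easily say what any single one ends up meaning. That's the black box.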

AI experts have devoted countless hours to finding better ways of looking inside their own creations. That’s already tough to do with a simple image-recognition network. It’s even more difficult with a deep neural network that’s crunching a system such as Earth’s climate, which consists of myriad moving parts.

Still, the rewards are worth the work. If scientists know how their neural network works, they can not only learn more about their own tools but also think about how to adapt them for other uses. They could make weather-forecasting models, for instance, that work better in a world with more carbon dioxide in the air.

So, Hassanzadeh and his colleagues had the idea to apply Fourier analysis, a method that has sat neatly in the toolboxes of physicists and mathematicians for decades, to their AI. Think of Fourier analysis as an act of translation: the end language represents a dataset as the sum of simpler wave-like functions. You can then apply filters to blot out parts of that sum, allowing you to see the patterns that remain.
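Here's a minimal sketch of that translate-filter-translate-back routine in Python, using NumPy's FFT functions. The signal and the cutoff frequency are invented for illustration; the study applied the same principle to its network, not to a toy sine wave.

```python
import numpy as np

# Build a signal: one slow wave (the pattern) buried in fast noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * rng.standard_normal(t.size)

# Translate: each Fourier coefficient says how much of one simple
# wave-like function the dataset contains.
coeffs = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# Filter: blot out every component faster than 10 cycles per unit time.
coeffs[freqs > 10] = 0.0

# Translate back: only the smooth, large-scale pattern survives.
smoothed = np.fft.irfft(coeffs, n=t.size)
```

Zeroing out the fast components, as here, is a low-pass filter; keeping only the fast ones would be a high-pass filter. Filters of this sort are exactly the kind of tools scientists have long applied by hand.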

As it happened, their attempt was a success. Hassanzadeh and his colleagues discovered that their neural network was, in essence, applying a combination of the same filters that many scientists would use themselves.

“This better connects the inner workings of a neural network with things that physicists and applied mathematicians have been doing for the past few decades,” says Hassanzadeh.

If he and his colleagues are correct about the work they’ve just published, then they have pried open, if only slightly, something that might seem like magic, using a crowbar fashioned from math that scientists have relied on for more than a century.

 
