
Self-driving cars. Faster MRI scans, interpreted by robotic radiologists. Mind reading and x-ray vision. Artificial intelligence promises to permanently alter the world. (In some ways, it already has. Just ask this AI scheduling assistant.)

Artificial intelligence can take many forms. But it’s roughly defined as a computer system capable of tackling human tasks like sensory perception and decision-making. Since its earliest days, AI has fallen prey to cycles of extreme hype and subsequent collapse. While recent technological advances may finally put an end to this boom-and-bust pattern, whose down phase is cheekily termed an “AI winter,” some scientists remain convinced winter is coming again.

What is an AI winter?

Humans have been pondering the potential of artificial intelligence for thousands of years. Ancient Greeks believed, for example, that a bronze automaton named Talos protected the island of Crete from maritime adversaries. But AI only moved from the mythical realm to the real world in the last seven decades, beginning with legendary computer scientist Alan Turing’s foundational 1950 essay, which asked, and provided a framework for answering, the provocative question, “Can machines think?”

At that time, the United States was in the midst of the Cold War. Congressional representatives decided to invest heavily in artificial intelligence as part of a larger security strategy. The emphasis in those days was on machine translation, specifically Russian-to-English and English-to-Russian. The years 1954 to 1966 were, according to computational linguist W. John Hutchins’ history of machine translation, “the decade of optimism,” as many prominent scientists believed breakthroughs were imminent and deep-pocketed sponsors flooded the field with grants.

But the breakthroughs didn’t come as quickly as promised. In 1966, seven scientists on the Automatic Language Processing Advisory Committee published a government-ordered report concluding that machine translation was slower, more expensive, and less accurate than human translation. Funding was abruptly cancelled and, Hutchins wrote, machine translation came “to a virtual end… for over a decade.” Things only got worse from there. In 1969, Congress mandated that the Defense Advanced Research Projects Agency, or DARPA, fund only research with a direct bearing on military efforts, putting the kibosh on numerous exploratory and basic scientific projects, including the AI research the agency had previously funded.

“During AI winter, AI research program[s] had to disguise themselves under different names in order to continue receiving funding,” according to a history of computing from the University of Washington. (“Informatics” and “machine learning,” the paper notes, were among the euphemisms that emerged in this era.) The late 1970s saw a mild resurgence of artificial intelligence with the fleeting success of the Lisp machine, an efficient, specialized, and expensive workstation that many thought was the future of AI hardware. But hopes were dashed by the late 1980s, this time by the rise of the desktop computer and resurgent skepticism among government funding sources about AI’s potential. The second cold snap lasted into the mid-1990s, and researchers have been ice-picking their way out ever since.

The last two decades have been a period of almost unrivaled optimism about artificial intelligence. Hardware, namely high-powered microprocessors, and new techniques, specifically those under the umbrella of deep learning, have finally created artificial intelligence that wows consumers and funders alike. A neural network can learn tasks after it’s carefully trained on existing examples. To use a now-classic example, you can feed a neural net thousands of images, some labeled “cat,” others labeled “no cat,” and train the machine to identify “cats” and “no cats” in pictures on its own. Related deep learning strategies also underpin emerging technology in bioinformatics and pharmacology, natural language processing in Alexa or Google Home devices, and even the mechanical eyeballs self-driving cars use to see.
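
In code, that training loop is surprisingly compact. The sketch below is a minimal illustration, not anything from a production system: it uses PyTorch, random stand-in “pixels” instead of real photos, and a tiny made-up network, so every name and number in it is an assumption chosen for readability.

```python
# A toy "cat" / "no cat" classifier, sketched in PyTorch.
# The data is random noise standing in for photos; a real system would be
# trained on thousands of genuinely labeled images.
import torch
import torch.nn as nn

# Fake "images": 200 flattened 64x64 grayscale pictures with made-up labels
# (1.0 = cat, 0.0 = no cat).
images = torch.rand(200, 64 * 64)
labels = torch.randint(0, 2, (200, 1)).float()

# A small feed-forward network: pixels in, cat probability out.
model = nn.Sequential(
    nn.Linear(64 * 64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

loss_fn = nn.BCELoss()                                   # how wrong was each guess?
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # how to nudge the weights

for epoch in range(20):          # show the labeled examples again and again
    predictions = model(images)  # guess "cat" or "no cat" for every image
    loss = loss_fn(predictions, labels)
    optimizer.zero_grad()
    loss.backward()              # work out which weights caused the errors
    optimizer.step()             # adjust those weights slightly
```

The loop is the whole trick: guess, measure the error, adjust, repeat. Fed enough genuinely labeled photos instead of noise, the same handful of lines is what lets a network start sorting new pictures into “cats” and “no cats” on its own.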

Is winter coming again?

But it’s those very self-driving cars that are causing scientists to sweat the possibility of another AI winter. In 2015, Tesla founder Elon Musk said a fully autonomous car would hit the roads in 2018. (He technically still has four months.) General Motors is betting on 2019. And Ford says buckle up for 2021. But these predictions look increasingly misguided. And, because they were made public, they may have serious consequences for the field. Couple the hype with the death of a pedestrian in Arizona, struck in March by an Uber operating in driverless mode, and things look increasingly frosty for applied AI.

Fears of an impending winter are hardly skin deep. Progress in deep learning has slowed in recent years, according to critics like AI researcher Filip Piekniewski. The “vanishing gradient problem” has diminished, but it still stops some neural nets from learning past a certain point, stymying human trainers despite their best efforts. And artificial intelligence’s struggle with “generalization” persists: A machine trained on house cat photos can identify more house cats, but it can’t extrapolate that knowledge to, say, a prowling lion.
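
To see what a vanishing gradient looks like in practice, consider the toy experiment below. It is a sketch under assumed settings (30 sigmoid layers, 16 units each, random inputs), chosen only to make the effect visible: the learning signal that reaches the first layer comes out dramatically smaller than the signal at the last one, which is why the earliest layers barely learn.

```python
# Illustrating the vanishing gradient problem with a deep stack of sigmoid layers.
import torch
import torch.nn as nn

# Thirty sigmoid layers, each 16 units wide (an arbitrary, illustrative depth).
layers = []
for _ in range(30):
    layers += [nn.Linear(16, 16), nn.Sigmoid()]
model = nn.Sequential(*layers)

x = torch.rand(8, 16)        # a small batch of random inputs
loss = model(x).sum()        # any scalar objective will do for this demo
loss.backward()              # backpropagate the error through all 30 layers

first = model[0].weight.grad.abs().mean().item()   # signal reaching the first layer
last = model[-2].weight.grad.abs().mean().item()   # signal at the last layer
print(f"mean |gradient|, first layer: {first:.2e}")
print(f"mean |gradient|, last layer:  {last:.2e}")
```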

These hiccups pose a fundamental problem for self-driving vehicles. “If we were shooting for the early 2020s for us to be at the point where you could launch autonomous driving, you’d need to see every year, at the moment, more than a 60 percent reduction [in safety driver interventions] every year to get us down to 99.9999 percent safety,” said Andrew Moore, Carnegie Mellon University’s dean of computer science, on a recent episode of the Recode Decode podcast. “I don’t believe that things are progressing anywhere near that fast.” In some years the need for human intervention may fall by 20 percent; in others, the improvement is in the single digits, potentially pushing the arrival date back by decades.
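
Moore’s point is, at bottom, compound interest running in reverse, and it is easy to check with a rough back-of-the-envelope calculation. The numbers in the sketch below (a normalized intervention rate and a 10,000-fold reduction target) are assumptions for illustration, not figures from the podcast, but they show why the annual rate of improvement matters so much.

```python
# How long until safety-driver interventions fall far enough, at different
# annual improvement rates? (Starting rate and target are illustrative.)
START = 1.0      # normalized current intervention rate
TARGET = 1e-4    # pretend we need a 10,000-fold reduction

def years_to_target(annual_reduction):
    """Years until the intervention rate reaches TARGET if it falls by
    `annual_reduction` (a fraction between 0 and 1) each year."""
    rate, years = START, 0
    while rate > TARGET:
        rate *= (1 - annual_reduction)
        years += 1
    return years

print(years_to_target(0.60))   # ~11 years at a 60 percent cut per year
print(years_to_target(0.20))   # ~42 years at 20 percent
print(years_to_target(0.05))   # ~180 years at 5 percent
```

At a 60 percent annual cut, the hypothetical target arrives in about a decade; at 20 percent it takes roughly four decades; single-digit progress pushes it out past a century, which is the gap Moore is pointing at.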

Much like actual seasonal shifts, AI winters are hard to predict. What’s more, the intensity of each event can vary widely. Excitement is necessary for emerging technologies to make inroads, but it’s clear the only way to prevent a blizzard is calculated silence—and a lot of hard work. As Facebook’s former AI director Yann LeCun told IEEE Spectrum, “AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”