CAPTCHA is Dead, But the AI Winter Lives On
The 701 Electronic Data Processing Machine entered service in 1954, performing cost-benefit analysis. The precursor to IBM’s Watson was referred to as an “electronic brain,” decades before the AI winter rolled in. IBM

If you developed an artificial intelligence that could see as a human does, how would you announce its presence to the world?

Would you wait until it could recognize objects in a home, identifying a chair as a piece of movable furniture, to be potentially nudged back under the dining room table, and not simply another laser-mapped obstacle to be veered around? Or demonstrate its powers of perception while studying X-ray films or MRI results, sifting through complex visual data to arrive at possible diagnoses?

Or would you set that machine intelligence loose on a spam-filtering protocol, and proudly proclaim that one of the most widespread online security features has been roundly defeated?

If you’re Vicarious, a San Francisco-based startup hoping to eventually monetize its artificial intelligence (AI) work, you go for the latter—the company announced yesterday that its system could beat CAPTCHA more than 90 percent of the time, recognizing letters and numbers that are visually contorted in the hopes of preventing the automated creation of bogus user accounts, and the carpet-bombing of message boards and online comment sections with generous offers of penis enlargement.

And yet, why? Does Vicarious’ unnamed AI feel an intraspecies pity for the poor spambots battering themselves against CAPTCHA’s walls? Or does Vicarious simply think that the best way to get the internet’s attention is by playing the rogue, showing how its AI can swan right past a protocol that other systems have to shoulder their way through? CAPTCHA (short for, brace yourself, Completely Automated Public Turing Test to Tell Computers and Humans Apart) is already defeated by degrees on a regular basis, and updated to stay ahead of rampant spammer innovation. Check out Vicarious, though, striding right up to the register with its illicit beer, flashing the best fake ID in town.

Here’s how Vicarious co-founder D. Scott Phoenix explained the CAPTCHA stunt, in yesterday’s press release:

Instead of supercomputing through thousands of stored examples of a given letter in order to pick it out from an intentionally cluttered and distorted sequence, Vicarious’ system simply needs a little tutoring. As Phoenix told Popular Science, ten examples of each letter are all the algorithms require.
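
To give a feel for what “ten examples of each letter” means in practice, here is a minimal, hypothetical sketch of training a character classifier on just ten labeled examples per class. scikit-learn’s clean 8x8 digits dataset and a nearest-neighbor model are stand-ins chosen purely for this illustration; none of it is Vicarious’ algorithm, which has to cope with deliberately cluttered and distorted letters.

```python
# A toy, hypothetical illustration of the data-efficiency claim above:
# train a simple classifier on only ten labeled examples per character
# class and see how it does. scikit-learn's 8x8 digits dataset and a
# k-nearest-neighbors model are stand-ins; this is not Vicarious' system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()          # 1,797 small images of the digits 0-9
rng = np.random.default_rng(0)  # fixed seed so the split is repeatable

train_idx, test_idx = [], []
for label in np.unique(digits.target):
    idx = np.flatnonzero(digits.target == label)
    rng.shuffle(idx)
    train_idx.extend(idx[:10])  # ten examples per class, per the claim
    test_idx.extend(idx[10:])   # everything else is held out for testing

model = KNeighborsClassifier(n_neighbors=3)
model.fit(digits.data[train_idx], digits.target[train_idx])
accuracy = model.score(digits.data[test_idx], digits.target[test_idx])
print(f"Held-out accuracy with 10 examples per digit: {accuracy:.2%}")
```

On clean digits, even a lazy learner like this does respectably with so little data; the claim Vicarious is making is that its system manages something comparable through CAPTCHA-grade clutter and distortion.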

In the wake of this stunt—and it’s clearly a stunt, designed to attract that plugged-in portion of the populace that knows or cares about CAPTCHA—Vicarious will slip back into obscurity. Thankfully, the company didn’t release its CAPTCHA-breaking algorithms. But, less fortunately, it stipulated in a media-oriented FAQ that it will be a few years before any additional news is released.

If Vicarious proved anything, it’s that the AI winter is real, and still raging, stronger than ever.

* * *

In some circles, “AI” and “winter” are fighting words. The term dates back to the 1980s, but the situation it describes is older: the freeze in funding and interest that followed the limitless promises of AI researchers in the late 1960s. Those were heady days, when computer scientists believed that full, human-like intelligence would be recreated at any moment. Computers were beginning to show their almost inconceivable powers of data processing, and what was the brain, and its passenger, the intellect, but a biological engine for churning and sorting data?

Wikipedia’s AI Winter entry summarizes the subsequent comedown extremely well, identifying two distinct periods—1974 to 1980 and 1987 to 1993—as well as numerous isolated events, during which investment in and federal backing of AI research fizzled. In a general sense, though, some believe that this winter of discontent is a perpetual state. The Turing test, proposed in 1950 as a measure of a program’s ability to mimic human thought by tricking a judge into believing he or she is interacting with a person, has yet to be “passed.” While AI is supposedly everywhere, in self-driving robotic toys and stock-picking algorithms and cover-diving video game characters, the term has been endlessly reinterpreted and defanged, pressed into service as a buzzword for nearly any automated process.

Those with skin in the game, researchers-turned-entrepreneurs like author and speaker Ray Kurzweil and Rethink Robotics founder Rodney Brooks, have long held that the winter is over. Since the basic tenets and functionality of AI are ubiquitous, with automation sunk almost invisibly into the search engines and automotive safety systems and other technologies that define life in the developed world, any talk of a stubborn winter, they argue, is a failure of imagination.

What’s more distressing, though? That the AI winter will stretch on, with no relief in sight, until some discrete system comes along that thinks like a full-fledged person? Or that the winter is over, because AI is all around us, and it’s far more boring than its original architects could have ever imagined?

The announcement from Vicarious is a textbook example of managed expectations in AI research. It can do one thing—but only one—about as well as a human being. So its breakthrough has to be painstakingly framed, within the context of a field that’s more hobbled than a lay reader might assume. Sorting through a string of mildly obscured letters, a task worthy of a grade schooler, is a big deal . . . because AI has problems doing simple things. Not surprisingly, the bright, limitless future of Vicarious’ system is deferred.

But before we watch Vicarious ride off into the cruel sunset of the AI winter, consider the shot that the company took at the most exciting artificial intelligence project in years, and the closest to a thinking machine in any current lab. Again, from the company’s press materials:

Forget AI Winter—those, right there, are fighting words. Though Watson was also unveiled with an unabashed PR stunt—its quiz show evisceration of two human Jeopardy champions in 2011—the system has since graduated to deployments with world-tilting implications. Less interesting is IBM pushing Watson as a call-center rep, using its natural language processing chops and rapid access to terabytes of internal data and millions more online files and records to interact with customers. But Watson is also playing Sherlock, diving into medical records, imaging results, millions of pages of journals, and other data to provide possible diagnoses and treatment options to doctors. The system is being deployed in this capacity at multiple medical centers, including the MD Anderson Cancer Center at the University of Texas, as part of its “moon shot” program to cure leukemia outright. Watson will serve as a diagnostic assistant, grinding through scattered research, patient, laboratory and other databases to find meaningful connections, and even draw conclusions regarding treatment and candidates for clinical trials.
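
To make the shape of that diagnostic-assistant work a little more concrete, here is a minimal retrieval sketch that ranks a few invented clinical notes against a free-text query. The TF-IDF approach and the sample sentences are assumptions for illustration only; Watson’s actual pipeline is far more elaborate and isn’t public in this form.

```python
# A minimal, hypothetical sketch of the retrieval step described above:
# rank a handful of invented clinical notes against a free-text query.
# TF-IDF plus cosine similarity stands in for Watson's far more elaborate
# natural language pipeline; the notes below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "Patient reports fatigue; labs show an elevated white blood cell count.",
    "Follow-up imaging shows no change in the small pulmonary nodule.",
    "Bone marrow biopsy findings are consistent with acute myeloid leukemia.",
]
query = "possible leukemia indicators in recent labs"

vectorizer = TfidfVectorizer(stop_words="english")
note_vectors = vectorizer.fit_transform(notes)   # one row per note
query_vector = vectorizer.transform([query])     # same vocabulary as the notes

# Rank notes by cosine similarity to the query, most relevant first.
scores = cosine_similarity(query_vector, note_vectors).ravel()
for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[i]:.2f}  {notes[i]}")
```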

That is what artificial intelligence, or something close to it, looks and sounds like. A single high-performing system—one that’s in the wild, not simply teased via press release or chatting up doctoral candidates in a lab—isn’t sufficient proof that the AI winter has receded. But Watson could be the nervy little groundhog we’ve been waiting for.