
A billion bad jokes notwithstanding, there’s nothing inherently wrong with politicians. They are liars, certainly, but that only proves that they’re human. And there’s no innate flaw in politics either. Loudly and inefficiently disagreeing with one another is a proud primate tradition, the alternative to which is voting with our fists (or more lethal options).

What’s disconcerting about politics is its naked persuasion. People are paid to write and deliver speeches whose clear intent is to steer opinion, with language that seems to shift as needed to cater to different regions and constituencies. Journalists covering politically sensitive issues can generate just as much unease, whether they make impassioned cases for a given position or pretend at objectivity by presenting only the facts they choose, leaving out others. It’s all persuasion, and that can be scary, because we’re wired to be persuaded. It’s the price of being social animals.

Lucky for us, machines aren’t so easily swayed. Last year, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill published a paper that demonstrated the use of artificial intelligence to dissect campaign speeches, quantifying the way that candidates adjust their language to appeal to different types of voters. “When people run for president of the United States, there’s a conventional wisdom that says that after you, Mitt Romney, or you, Barack Obama, win a primary, you’re going to shift the way you present yourself to the public, and move to the center,” says Noah Smith, a computer scientist at Carnegie Mellon University.

To test this assumption, Smith and the rest of the speech analysis team first ran 112 nonfiction political books and 765 magazine issues through machine learning algorithms, identifying the specific terms, or “cues,” most often associated with a given political ideology. Ron Paul’s book was a “great example of libertarian language,” says Smith, though his political scientist colleague still had to carefully label specific chapters, to help the system make sense of what it was reading and extracting. Language could be tagged anywhere from far right to far left, libertarian or religious, or all of the above. With more than 32 million cues collected, the team assigned a computer model called CLIP, short for cue-lag ideological proportions, the unpleasant task of sifting through transcriptions of speeches related to the 2008 and 2012 presidential campaigns, and determining the rate and relationship of those political cues.
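To make the mechanics concrete, here is a minimal sketch, in Python, of what cue-based scoring could look like. The lexicon entries, labels, and counting logic below are all invented for illustration; the real CLIP is a probabilistic model over sequences of cues and the gaps, or “lags,” between them, not a simple counter.

```python
# Hypothetical sketch of cue-based ideological scoring. The lexicon and
# its labels are invented examples; CLIP's actual cues were learned from
# labeled books and magazines, and its model is probabilistic.
from collections import Counter

CUE_LEXICON = {
    "job creators": "right",
    "illegal immigration": "right",
    "working families": "left",
    "climate crisis": "left",
    "sound money": "libertarian",
}

def ideological_proportions(speech: str) -> dict:
    """Count cue occurrences and return each label's share of the total."""
    text = speech.lower()
    counts = Counter()
    for cue, label in CUE_LEXICON.items():
        counts[label] += text.count(cue)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items() if n} if total else {}

sample = ("We must stand with working families and confront the climate "
          "crisis, even as we cut red tape for job creators.")
print(ideological_proportions(sample))
# -> roughly {'right': 0.33, 'left': 0.67}
```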

“What we came away with was a pretty clear picture that, yes, in every case we tested, there’s a move from more extreme language to the center by people who win primaries,” says Smith. “This was an established fact that people believed, but that hadn’t been empirically tested.”

You can read the results for yourself, though they can be hard to put into proper context, given that this is AI-based statistical analysis, where it’s not only a matter of how many times a phrase comes up, but where in the speech it appears, and how close it sits to other cues. Plus, as Smith admits, this project confirmed an already commonly held belief, so there are no real surprises. Still, looking at what CLIP came up with, there appears to be real value in turning ruthlessly objective machines loose on stump speech rhetoric.
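To picture why position matters, imagine a scorer that weights each cue by how close it sits to the cue before it, so that a sustained ideological passage counts for more than a few scattered mentions. The heuristic below is invented for this article; CLIP’s actual handling of cue lags is probabilistic, as described in the paper.

```python
# Invented proximity heuristic: cues appearing close together carry more
# weight. This is an illustration only, not CLIP's actual lag model.
def weighted_proportions(cue_positions):
    """cue_positions: (token_index, ideology_label) pairs, in speech order."""
    totals = {}
    prev = None
    for index, label in cue_positions:
        # A short lag since the previous cue pushes the weight toward 1.0.
        weight = 1.0 if prev is None else 1.0 / (1.0 + (index - prev) / 10.0)
        totals[label] = totals.get(label, 0.0) + weight
        prev = index
    total = sum(totals.values())
    return {label: round(w / total, 2) for label, w in totals.items()}

# Three clustered right cues outweigh two widely scattered left cues.
print(weighted_proportions([(5, "right"), (8, "right"), (12, "right"),
                            (90, "left"), (400, "left")]))
# -> roughly {'right': 0.94, 'left': 0.06}
```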

In September 2011, at the beginning of the 2012 primary season, Mitt Romney gave a speech in Tampa, Florida that, based on CLIP’s analysis, was heavy on right-wing cues. It included, for example, various terms related to illegal immigration. This is a gross oversimplification, but the model seems to put that speech at roughly 68 percent right-skewing. A little less than a year later, Romney was back in Tampa to accept the Republican nomination. But with a national audience watching on television, and the general election campaign officially kicking off, any references to immigration were gone. CLIP actually assessed the speech as left-leaning (around 54 percent of the politically relevant language was aimed in that direction).

Barack Obama showed a similar change in rhetoric during the 2008 campaign, moving from language that was roughly 59 percent left-leaning leading up to the primaries, to cues that were some 53 percent right-leaning during the general election. So not only was Obama apparently pursuing the classic assumed strategy of aiming for the center, but he oversteered into red-state issues, to better appeal to conservatives. Was this intentional, or a natural response to the way the opposition was campaigning? Whatever the cause, the machine seems to have caught what the pundits missed.

* * *

The goal of CLIP’s creators wasn’t to call out specific politicians or strategies, but to prove that this kind of AI-powered analysis could be useful. “It’s a great example of something people are probably not very good at,” says Smith. “It’s very hard for you and me to listen to a speech and coldly, objectively evaluate how often they’re attempting to cue a specific constituency. We’re just too subjective and too tied into the issues. We care too much.”

CLIP’s assessment of rhetoric reveals one of the more complex benefits of natural language processing (NLP), a branch of AI research that focuses on the analysis of spoken or written text. NLP’s most famous champion is IBM’s Watson, whose ability to process millions of pages of documents made it a Jeopardy champion in 2011. That platform is now being leveraged for a wide range of text-crunching applications, the most dramatic of which involve looking at medical journals and patient records, to assist physicians in identifying relevant treatments and clinical trials for specific cancer sufferers. NLP is also the AI backbone of Apple’s Siri, a presumably powerful speech recognition system that’s still far more useful in iPhone commercials than in real life.

But what Watson and Siri are offering is what NLP has offered for years, and what computers have for decades: the ability to quickly process lots of data. CLIP takes NLP into headier territory, however, by crunching data that we likely wouldn’t trust a human to review without injecting his or her own bias into the results. No matter how ill-informed a person might claim to be about politics, specific politicians tend to evoke visceral reactions. “The computer has no skin in the game at all,” says Smith. “It has no position on how Mitt Romney should sound.”

As a political tool, or really a kind of anti-political tool that lays bare some of the slippery tactics of speechwriters and campaign managers, NLP seems relatively powerful. But in the larger context of AI, a system like CLIP disputes the impression that NLP is at its most fascinating when it mimics the way humans think. CLIP works because it has no opinions about Mitt Romney’s religious background or Barack Obama’s 2008 campaign trail promises. CLIP and similar AI applications are effective because they can handle human language while remaining inhuman and dispassionate.

CLIP’s role in upcoming elections is unknown, and probably irrelevant. Smith and his colleagues are academics, not political operators or startup entrepreneurs. Since their methodology and data are publicly available, parties interested in adapting the approach can just as easily create their own models. But Smith’s follow-up project could have even wider implications for the use of AI in politics. Working in collaboration with UNC Chapel Hill once again, as well as researchers at UC Davis and the University of Maryland, Smith is exploring the issue of framing in media coverage of political issues.

Framing, in a nutshell, is about choices. It refers to how an issue is presented, meaning what sort of language and references are included and emphasized, as well as what content is missing. Does a story that addresses immigration, for example, discuss stranded children and families riven by deportation, or is it concerned with security and enforcement issues?

The existing research seems to indicate that framing is real, and sometimes counterintuitive. A 1993 study found that people who read stories about poverty that focus on national unemployment are more likely to feel that large institutions are to blame. Meanwhile, coverage that details individuals dealing with poverty is more likely to lead readers to blame financial hardship on personal decisions. Human interest stories, in other words, can backfire, evoking the opposite of sympathy. Stranger still, some have hypothesized that frames are contagious. One outlet’s coverage choices might spread to others, and the wider policy debate surrounding a given issue may leap across state lines, with specific frames intact. Our national debates might be defined as much by linguistic choices as they are by facts. “Framing seems to really matter. So it seems like the kind of thing that’s really worth tracking,” says Smith.

With funding from the National Science Foundation, Smith and his team are currently creating what they call a Policy Frames Codebook, focusing on frames related to same-sex marriage, smoking, and immigration. So far, they’ve collected framing data from roughly 9,500 articles published between 1990 and 2012, a painstaking, fully human process that the team believes could be automated in the future. Two years into this scheduled three-year project, it’s too early to draw many conclusions. The only certainty is that this is just the beginning. The more advanced and accessible AI becomes, the more aware politicians, journalists and other professional persuaders will have to be about what they say, and how they say it.
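If that annotation step is ever automated, one plausible (and purely hypothetical) approach would be to treat frame labeling as ordinary text classification. The sketch below assumes scikit-learn is available and invents a handful of toy training examples; a real system would learn from the thousands of hand-annotated articles in the codebook.

```python
# Hypothetical sketch: frame annotation as supervised text classification.
# Training snippets and frame labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Border agents report a rise in crossings and enforcement actions.",
    "Families separated by deportation struggle to reunite with children.",
    "New visa rules could reshape the market for seasonal farm labor.",
    "Officials cite national security concerns in the latest border bill.",
]
frames = ["security", "human interest", "economics", "security"]

# Bag-of-words features feeding a linear classifier: a common baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, frames)

print(model.predict(["Lawmakers debate funding for border security."]))
# Likely ['security'], given the toy training data.
```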

I, for one, welcome our robotic over-analyzers.