You Probably Don't Care If You Get Your News From A Robot

Not that we're hurt, or anything.


What is The Future of Journalism? Data reporting? Drones? Computer-generated news, like the algorithm that broke the news of today's California earthquake?

Case for the latter: Compare the opening paragraphs of two football stories reprinted in a recent study.

Even with an unexceptional outing for Philip Rivers, the Chargers handled the Chiefs, 37-20, at Arrowhead Stadium. Rivers found the end zone for two touchdowns against the Chiefs on 18 of 23 passing for 209 yards and one pick. Matt Cassel went 24 of 42 with 251 yards passing, two touchdowns and three picks for the Chiefs. Jackie Battle carried the ball 15 times for 39 yards with a touchdown in addition to four receptions for 42 yards and another touchdown. Antonio Gates caught three passes for 59 yards.

Versus:

Matt Cassel, Russell Wilson and Mark Sanchez have struggled, and their starting jobs are in jeopardy. Their passes might sail high, but three NFL quarterbacks have landed far short of expectations. Kansas City's Matt Cassel, Seattle's Russell Wilson, and the New York Jets’ Mark Sanchez aren't the only starting quarterbacks who are struggling—there are several—but they're the ones inching ever closer to the bench.

Can you tell which was written by an algorithm? It's the first; the second comes from a real human being at the Los Angeles Times. If you couldn't tell the difference, don't worry: other people couldn't, either.

Start-up companies like Narrative Science have been using algorithms to produce short, simple news articles for several years now, but there isn't much research on how readers actually feel about those articles. Researcher Christer Clerwall of Karlstad University in Sweden had two groups of readers each read one of those articles, then surveyed them on how they felt: Which seemed more objective? Which was easier to read?

Here's how the results looked:

[Chart: how readers rated the two articles. Credit: Christer Clerwall]

You'll notice the ratings are fairly close, and the study notes that, too; the only statistically significant field was "pleasant to read," where the journalist's article won handily. (Take that, machines.) But the fact that the other results weren't significant is, in itself, possibly significant: the people surveyed didn't seem to care which article they read. This was backed up when Clerwall had the participants guess whether the article they read was written by a person or by a machine. "Of the 27 respondents who read the software-generated text, 10 thought a journalist wrote it and 17 thought it was software-generated. For the 18 respondents in the 'journalist group,' 8 perceived it as having been written by a journalist, but 10 thought software wrote it," he writes in the study.
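If you want a rough sense of what "not significant" means here, you can run a quick check on those guess counts yourself. The sketch below is just an illustration, not Clerwall's own analysis: it uses Fisher's exact test to ask whether readers' guesses depended on which article they were actually given. A large p-value means the guesses look close to chance.

```python
# Illustration only: a quick independence check on the guess counts
# reported in the study, not a reproduction of Clerwall's analysis.
from scipy.stats import fisher_exact

# Rows: which article the group actually read (software-written, journalist-written)
# Columns: what readers guessed (written by a journalist, written by software)
guesses = [
    [10, 17],  # software-written article: 10 guessed journalist, 17 guessed software
    [8, 10],   # journalist-written article: 8 guessed journalist, 10 guessed software
]

odds_ratio, p_value = fisher_exact(guesses)
print(f"odds ratio: {odds_ratio:.2f}, p-value: {p_value:.2f}")
# A p-value well above 0.05 suggests readers' guesses don't clearly track
# which article they actually saw.
```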

But if you're hoping to invest in journalism robots, caveat emptor: This was a tiny sample, and you'd need a lot more research to prove machines could outperform, or even match, journalists. (Or, well, vice versa.) Although maybe this article wouldn't score high on the "objectivity" portion of that scale.

You can read the full study online here.