Case for the latter: Compare the first few paragraphs from two news stories about a sports game, re-published in a recent study.
Can you tell which was written by an algorithm? It’s the first, while the second comes from a real human being at the Los Angeles Times. If you couldn’t tell the difference, don’t worry: other people couldn’t, either.
Start-up companies like Narrative Science have been using algorithms to produce short, simple news articles for some time now. But there’s been little research on how readers actually perceive those articles. Researcher Christer Clerwall of Karlstad University in Sweden had participants read one of the two articles, then surveyed them on how they felt: Which seemed more objective? Easier to read?
Here’s how the results looked:
You’ll notice the ratings are fairly close, and the study notes that, too; the only statistically significant field was “pleasant to read,” where the journalist article won handily. (Take that, machines.) But the fact that the results weren’t significant is, in itself, possibly significant; the people surveyed didn’t seem to care which article they read. This was backed up when Clerwall had the participants guess which article was written by a person, and which by a machine. “Of the 27 respondents who read the software-generated text, 10 thought a journalist wrote it and 17 thought it was software-generated. For the 18 respondents in the ‘journalist group,’ 8 perceived it as having been written by a journalist, but 10 thought software wrote it,” he writes in the study.
But if you’re hoping to invest in journalism-robots, caveat emptor: This was a tiny sample, and you’d need a lot more research to prove machines could outperform, or even match, journalists. (Or, well, vice versa.) Although maybe this article wouldn’t score high on the “objectivity” portion of that scale.
You can read the full study online here.