You may have noticed: It’s a weird time for facts. On one hand, despite the hand-wringing over our post-truth world, facts do still exist. On the other, it’s getting really hard to dredge them from the sewers of misinformation, propaganda, and fake news.1 Whether it’s virus-laden painkillers, 3 million illegal votes cast in the 2016 presidential election, or a new children’s toy called My First Vape, phony dispatches are clogging the internet.
Fact-checkers and journalists try their best to surface facts, but there are just too many lies and too few of us. How often the average citizen falls for fake news is unclear. But there are plenty of opportunities for exposure. The Pew Research Center reported last year that more than two-thirds of American adults get news on social media, where misinformation abounds. We also seek it out. In December, political scientists from Princeton University, Dartmouth College, and the University of Exeter reported that 1 in 4 Americans visited a fake news site—mostly by clicking to them through Facebook—around the 2016 election.
As partisans, pundits, and even governments weaponize information to exploit our regional, gender, and ethnic differences, big tech companies like Facebook, Google, and Twitter are under pressure to push back. Startups and large firms have launched attempts to deploy algorithms and artificial intelligence to fact-check digital news. Build smart software, the thinking goes, and truth has a shot. “In the old days, there was a news media that filtered out the inaccurate and crazy stuff,” says Bill Adair, a journalism professor at Duke University who directs one such effort, the Duke Tech & Check Cooperative. “But now there is no filter. Consumers need new tools to be able to figure out what’s accurate and what’s not.”
With $1.2 million in funding, including $200,000 from the Facebook Journalism Project, the co-op is supporting the development of virtual fact-checking tools. So far, these include ClaimBuster, which scans digital news stories or speech transcripts and checks them against a database of known facts; a talking-point tracker, which flags politicians’ and pundits’ claims; and Truth Goggles, which makes credible information more palatable to biased readers. Many other groups are trying to build similar tools.
As a journalist and fact-checker, I wish the algorithms the best. We sure could use the help. But I’m skeptical. Not because I’m afraid the robots are after my job, but because I know what they’re up against. I wrote the book on fact-checking (no, really, it’s called The Chicago Guide to Fact-Checking2). I also host the podcast Methods, which explores how journalists, scientists, and other professional truth-finders know what they know. From these experiences, I can tell you that truth is complex and squishy. Human brains can recognize context and nuance, which are both key in verifying information. We can spot sarcasm. We know irony. We understand that syntax can shift even while the basic message remains. And sometimes we still get it wrong.3 Can machines even come close?