Reuters built a bot that can identify real news on Twitter
Who says AI can’t spot fake news?
The bin Laden raid, the Boston Marathon bombing, Sully’s life-saving landing on the Hudson. News often hits Twitter well before the mainstream media has a chance to catch up. In fact, according to Reuters’ internal research, about 20 percent of all news breaks on Twitter first. But, as this year’s election cycle proves, fake news can break (and spread) just as quickly as the real news can.
Reuters’ answer to this problem is a new system called News Tracer, an algorithm that weeds through every tweet (all 500 million of them that go up each day) to sort real news from spam, nonsense, ads, and noise. This way, reporters can get out of the social-media weeds, and spend more time digging deeper into stories.
In development since 2014, reports the Columbia Journalism Review, News Tracer’s work starts by identifying clusters of tweets that are topically similar. Politics goes with politics; sports with sports; and so on. The system then uses natural-language processing to produce a coherent summary of each cluster.
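Reuters hasn’t published News Tracer’s clustering method, but the general idea of grouping topically similar tweets can be sketched with something as simple as word-overlap similarity. The threshold value and greedy single-pass strategy below are illustrative assumptions, not the real system:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(tweets, threshold=0.3):
    """Greedy single-pass clustering: attach each tweet to the first
    cluster whose seed tweet is similar enough, else start a new one."""
    clusters = []  # each cluster is a list of (text, vector) pairs
    for text in tweets:
        vec = Counter(text.lower().split())
        for c in clusters:
            if cosine(vec, c[0][1]) >= threshold:
                c.append((text, vec))
                break
        else:
            clusters.append([(text, vec)])
    return [[t for t, _ in c] for c in clusters]

tweets = [
    "explosion reported near downtown station",
    "breaking explosion near the downtown station, police on scene",
    "our team wins the cup tonight",
]
groups = cluster(tweets)  # two clusters: the incident, and the sports tweet
```

A production system would use far richer representations (embeddings, entity extraction) than raw word counts, but the grouping step works the same way: news about one event collapses into one cluster that can then be summarized.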
What differentiates News Tracer from other popular monitoring tools is that it was built to think like a reporter. “The interesting exercise when you start moving to machines is you have to start codifying this,” Reg Chua, Reuters’ executive editor for data and innovation, told the Journalism Review.
That virtual mindset takes 40 factors into account, according to Harvard’s NiemanLab. It uses information like the location and status of the original poster (e.g. is she verified?) and how the news is spreading to establish a “credibility” rating for the news item in question. The system also does a kind of cross-check against sources that reporters have identified as reliable, and uses that initial network to identify other potentially reliable sources. News Tracer can also tell the difference between a trending hashtag and real news.
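In the simplest terms, a credibility rating built from many signals can be modeled as a weighted score. The factor names and weights below are purely hypothetical stand-ins for the roughly 40 signals News Tracer reportedly uses:

```python
# Hypothetical signals and weights — the real News Tracer combines
# about 40 factors; these three are illustrative only.
FACTORS = {
    "verified_author": 0.4,    # is the original poster verified?
    "in_trusted_network": 0.4, # linked to reporter-vetted sources?
    "geo_matches_event": 0.2,  # poster's location fits the event?
}

def credibility(signals: dict) -> float:
    """Weighted sum of boolean signals, scaled to a 0-100 rating."""
    score = sum(w for name, w in FACTORS.items() if signals.get(name))
    return round(100 * score, 1)

tweet_signals = {"verified_author": True, "geo_matches_event": True}
rating = credibility(tweet_signals)  # 60.0 on this toy scale
```

The real system is certainly more sophisticated (learned weights, non-boolean features, cross-checks against known-reliable accounts), but the shape of the computation — many weak signals combined into one score a reporter can triage — is the same.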
The mix of data points News Tracer takes into account means it works best with actual, physical events—crashes, protests, bombings—as opposed to the he-said-she-said that can dominate news cycles.
Still, we’re at an interesting moment regarding how artificial intelligence can—or should—be used in news. Today, Facebook pushed out a major PR/educational campaign about its AI efforts, seeking to demystify abstract concepts like machine learning and neural networks. These tools underpin what does, or doesn’t wind up in your Timeline, and will continue to shape how you experience Facebook in the future.
By monitoring, learning from, and catering to a user’s clicking, reading, and liking habits, the social-media giant has opened itself up to a flurry of criticism. By catering to what a user wants to see, the system also pushes down things they perhaps should see. Critics call this phenomenon the “filter bubble.”
Among the most acute repercussions of these bubbles is the propagation of fake news on the site. Bogus headlines (Pope Francis endorses Trump, Bill Clinton raped a 13-year-old girl) reverberated through the echo chamber that is the filter bubble.
Facebook’s powerful AI could probably work to filter out these fake headlines, AI research lead Yann LeCun said at a recent press event. But, CEO Mark Zuckerberg has some qualms about the idea, noting in a mid-November post that “identifying the ‘truth’ is complicated” and even mainstream sources don’t get 100 percent of the facts right 100 percent of the time.
In a follow-up post the next week, Zuckerberg laid out a seven-point plan for identifying obvious hoaxes. Projects in the plan include tools to assess the quality of related articles on a given news item, third-party verification, and easier reporting mechanisms for users.
Similarly, French newspaper Le Monde has announced plans to help readers spot fake news throughout the web. The paper’s plan is to develop an extension for the Chrome and Firefox browsers. Initially, it will work off a database of trusted sources, and raise a red flag when a reader stumbles upon suspect content. Intermediate sources will get yellow flags, and trusted ones will go green.
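The core of such an extension is a simple lookup: match the article’s domain against lists of trusted and intermediate sources. The domains below are placeholder assumptions, not Le Monde’s actual database:

```python
from urllib.parse import urlparse

# Placeholder source lists standing in for Le Monde's database.
TRUSTED = {"lemonde.fr", "reuters.com"}
INTERMEDIATE = {"example-blog.net"}

def flag(url: str) -> str:
    """Return 'green', 'yellow', or 'red' based on the page's domain."""
    host = urlparse(url).netloc.lower()
    host = host[4:] if host.startswith("www.") else host
    if host in TRUSTED:
        return "green"
    if host in INTERMEDIATE:
        return "yellow"
    return "red"  # unknown sources are flagged as suspect
```

Domain lookup is deliberately blunt — it rates the outlet, not the individual article — which is exactly the trade-off a browser extension makes to give readers an instant signal.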
Of course, Facebook and Reuters do not share a common mission. The former is a social network turned news aggregator, while the latter is a news producer. At their core, however, they both share the responsibility of informing the public, be that by conception or evolution. The work that Reuters has done on News Tracer over the past two years helps prove that artificial intelligence, when developed and implemented smartly, can raise the standard of the news we consume.