
Any fact-checker who works in the media has a straightforward but challenging job: make sure all the claims in an article are true. Are simple facts, like the distance between two cities, accurate? Are the quotes correct? Are broader statements true? It’s an important task, and in an era of outright fake news—especially considering the 2016 election and the upcoming midterms—it’s becoming even more crucial.

To tackle this larger issue, researchers from MIT as well as institutions in Qatar and Bulgaria have been working on a way to use artificial intelligence to help humans make sense of the complicated media landscape. And they realized that before developing an AI that can fact-check individual claims, they first needed to analyze how reliable the news websites themselves are.

So they set out to build an AI that could evaluate how factually accurate different sites are, as well as their political bias.

To train their AI system, they first used data on 1,066 websites listed in a source called Media Bias/Fact Check. Then the AI analyzed information about each news website, drawing on sources like articles on the site itself, its Wikipedia page, its Twitter account, even its URL. Using information like this, the AI predicted how factual a website was with about 65 percent accuracy, and detected its bias with about 70 percent accuracy.
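
For readers who want a concrete picture, here is a minimal sketch of what that kind of training setup could look like in Python with scikit-learn. The classifier choice, the precomputed feature files, and the label encodings are all illustrative assumptions, not the paper's exact pipeline:

```python
# A minimal sketch of training site-level classifiers on Media Bias/Fact Check
# labels. The feature files and label encodings below are assumptions for
# illustration; the real system's feature set is described in the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical per-site feature matrix: each row concatenates signals drawn
# from a site's own articles, its Wikipedia page, Twitter account, and URL.
X = np.load("site_features.npy")           # shape (1066, n_features), assumed precomputed
y_fact = np.load("factuality_labels.npy")  # e.g. 0=low, 1=mixed, 2=high factuality
y_bias = np.load("bias_labels.npy")        # e.g. 0=left, 1=center, 2=right

for task, y in [("factuality", y_fact), ("bias", y_bias)]:
    clf = SVC(kernel="rbf")                # an SVM is one plausible classifier choice
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{task}: mean cross-validated accuracy = {scores.mean():.2f}")
```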

One of the best resources for the AI is one that humans rely on, too. “It turns out that Wikipedia is very important,” says Ramy Baly, a postdoc at MIT’s Computer Science and Artificial Intelligence Lab and the paper’s first author. That’s because the information you need to know about a news source might be right there: The Wikipedia page for The Onion, for example, labels it as satirical right up top. The Drudge Report’s Wikipedia page labels it as conservative.

Wikipedia was important for another reason. “Not having the Wikipedia page is associated with a website not being very reliable,” Baly adds.
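
That presence-or-absence signal is simple enough to sketch in code. The snippet below queries the public MediaWiki API to check whether an English Wikipedia article exists for an outlet; the outlet names are just examples, and a production system would also need to handle redirects and disambiguation pages:

```python
# Check whether a news outlet has an English Wikipedia article at all,
# using the public MediaWiki API. Absence is itself a (weak) signal.
import requests

def has_wikipedia_page(title: str) -> bool:
    """Return True if an English Wikipedia article with this title exists."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": title, "format": "json"},
        timeout=10,
    )
    pages = resp.json()["query"]["pages"]
    # The API reports missing articles with a "missing" flag on the page entry.
    return all("missing" not in page for page in pages.values())

print(has_wikipedia_page("The Onion"))  # an outlet with a well-known page
print(has_wikipedia_page("Some Obscure Outlet That Does Not Exist"))
```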

[Figure: MIT results show a correlation between publications with an “extreme” level of bias and a low level of factual accuracy. Credit: MIT CSAIL]

Keeping in mind the overall trustworthiness of the website itself (and checking its Wikipedia page, if it has one) is a good step for regular people, too. For example, in August, Facebook and a cybersecurity firm announced they’d uncovered “inauthentic” news coming out of Iran. One of the websites associated with Iran was called the Liberty Front Press; it billed itself as “independent” but appeared to actually be pro-Iran. And tellingly, the site does not appear to have a Wikipedia page. (Facebook also has some good tips for us non-AIs to keep in mind when looking for fake news.)

Of course, the MIT researchers aren’t the only ones using AI to analyze language like this: Perspective, an AI system from Jigsaw (part of Google’s parent company, Alphabet), automatically scores the toxicity of reader comments, and Facebook has turned to AI to help augment its efforts to keep hate speech at bay in Myanmar.

Another source was even more important than Wikipedia for the MIT researchers’ AI system: articles on the websites themselves. The AI analyzed between 50 and 150 articles on each news site and examined the language in them. “Extremely biased websites try to appeal to the emotions of the readers,” Baly says. “They use a different kind of language” compared to a mainstream, down-the-middle site.
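
As a toy illustration of that idea, emotionally charged language can be approximated with simple lexicon counts averaged over a site's articles. The word list below is an illustrative stand-in for the much richer linguistic features the actual system uses:

```python
# Toy sketch: score how emotionally charged an article's language is by
# counting hits against a small lexicon. The word list is illustrative only.
import re
from collections import Counter

EMOTIONAL = {"shocking", "outrageous", "disaster", "destroy", "evil",
             "horrifying", "corrupt", "traitor", "disgrace", "amazing"}

def emotional_word_rate(article_text: str) -> float:
    """Fraction of an article's tokens drawn from the emotional-word lexicon."""
    tokens = re.findall(r"[a-z']+", article_text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in EMOTIONAL) / len(tokens)

# A site-level feature could average this rate over the 50 to 150 sampled articles.
articles = [
    "This SHOCKING disaster will DESTROY everything you hold dear!",
    "The city council approved the transportation budget on Tuesday.",
]
print([round(emotional_word_rate(a), 3) for a in articles])
```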

Baly says they’d still like to make their system more sophisticated. Their goal at this stage was to “initiate a new way of thinking of how to tackle this problem.”