Twitter’s fact-checking service Birdwatch is headed for your feed

Only time will tell if contributor comments are as helpful as they purport to be.
Misinformation runs rampant online, but do services like Birdwatch help? DepositPhotos

On the internet, you can find just about anything—and quite a bit of it is misinformation. Online falsehoods range from silly to outright dangerous, and any of them can capture the minds of huge audiences once they gain traction on social media.

Social media giants have taken different approaches to the problem, with varying levels of success. Twitter's distinctive tactic is to enlist its own users to flag dubious claims in their feeds through a program called Birdwatch. And as of this morning, the fact-checking notes that Birdwatch contributors attach to questionable statements are visible to Twitter users across the entire US.

[Related: Whistleblower tells Congress that Twitter has a spy problem.]

The service, which expanded last month with hopes of bringing on 1,000 more contributors a month, is more or less peer-to-peer fact checking. It is a bit like sharing a Google document with your classmates: you may have written something down incorrectly, but if you're lucky, a peer will add a suggested correction and context to the claim that isn't quite accurate.

But with millions and millions of users, letting just anybody weigh in isn't always the best way to go. Birdwatch contributors go through a vetting process that helps determine how helpful their notes are. A "rating impact" score is meant to ensure that fact checkers let into the fold keep doing a good job—those who don't risk having their Birdwatch privileges revoked.

This is a feature of the "bridging algorithm" Twitter integrated into the program, which highlights notes only when they find consensus among raters from multiple groups, rather than running a popularity contest based on raw upvote counts. "This is a novel approach. We're not aware of other areas where this has been done before," Twitter Product VP Keith Coleman tells TechCrunch. In testing, people were reportedly 20–40 percent less likely to agree with a "misleading" post after viewing Birdwatch notes, compared to those who saw only the tweet.
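To make the "bridging" idea concrete, here is a minimal sketch—not Twitter's actual code, and the clustering of raters into viewpoint groups is assumed—showing how a note might be required to clear a helpfulness bar within every rater group, so that a note popular with only one group does not surface:

```python
# Illustrative sketch of a bridging-style check (hypothetical, not
# Twitter's implementation): a note surfaces only if raters from
# *every* viewpoint cluster rate it helpful often enough, instead of
# simply winning an overall vote count.

from collections import defaultdict

def bridging_helpful(ratings, threshold=0.6):
    """ratings: list of (cluster_id, is_helpful) pairs.

    Returns True only when each cluster's helpful rate meets the
    threshold, so cross-group consensus is required.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    return all(
        sum(votes) / len(votes) >= threshold
        for votes in by_cluster.values()
    )

# A note that one group loves and the other rejects fails the check,
# even though it has more upvotes overall (9 of 12):
one_sided = [("A", True)] * 9 + [("B", False)] * 3
print(bridging_helpful(one_sided))    # False

# A note rated helpful across both groups passes:
cross_group = [("A", True), ("A", True),
               ("B", True), ("B", True), ("B", False)]
print(bridging_helpful(cross_group))  # True
```

The key design choice this illustrates is taking a per-group minimum rather than an overall average: a simple upvote tally would have surfaced the first note.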

[Related: Twitter’s fact-checking program might be headed to your feed.]

But all of this does come with concerns. Research from the nonprofit media institute Poynter found that the most "prolific" Birdwatch user was more likely to mark tweets critical of conservative politicians as "misleading" while marking similar tweets critical of left-wing politicians as "not misleading." Additionally, less than half of Birdwatch notes include a source, according to the Poynter research. And as recently as last month, the program admitted a QAnon account as a contributor.

This all follows news that would-be buyer Elon Musk has revived his attempt to purchase Twitter. What that means for Birdwatch, and for Twitter in general, is, for now, up in the air.