Twitter’s fledgling misinformation tool is adding aliases
Birdwatch, Twitter's crowdsourced misinformation-fighting pilot program, is adding a new feature to help decrease bias and polarization.
Twitter announced on Monday that its crowdsourced misinformation-fighting tool, Birdwatch, is introducing aliases for its contributors. Contributors will have one chance to pick from five randomized alias options (as opposed to using their actual usernames), which will be available through the Birdwatch link.
“We want everyone to feel comfortable contributing to Birdwatch,” the Birdwatch account tweeted, “and aliases let you write and rate notes without sharing your Twitter username.”
If you haven’t heard of Birdwatch before, here’s what you missed: Twitter started the program around the beginning of this year, with the goal of allowing users to collaboratively fact-check tweets and combat misinformation by flagging misleading posts and providing notes to give more context.
In a thread, the Birdwatch account noted that the majority of active and prospective participants (especially those who identify as female and Black) preferred to use an alias. Birdwatch also linked to research that showed that aliases could potentially help decrease bias and polarization, by shifting attention away from the author and towards the content of a note.
[Related: Polite warnings are surprisingly good at reducing hate speech on social media]
“To ensure this change doesn’t come at the expense of accountability, we’re also rolling out Birdwatch profile pages that make it easy to see one’s past contributions,” Birdwatch added.
Back when the Birdwatch pilot program kicked off in January, it only included approximately 1,000 US users, The Verge reported. But the program intends to continue enrolling new applicants on a rolling basis, prioritizing accounts that “follow and engage with different audiences and content than those of existing participants.”
Throughout the first phase of the pilot, users can only view notes on the Birdwatch site, where other participants can rate how helpful various contributor notes are. (This will have no impact on the Twitter experiences of people outside of the pilot.)
“Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors,” Keith Coleman, Twitter’s VP of product, wrote in a blog post earlier this year. Additionally, Twitter wants to see how Birdwatch could complement its existing efforts to combat misinformation.
On the site, the highest-rated notes are tagged onto their respective tweets. Birdwatch also fields feedback from the public on ways to make the system better.
[Related: Twitter’s efforts to tackle misleading tweets just made them thrive elsewhere]
Twitter, along with other big tech platforms like Facebook and Google, has been forced to take a long, hard look at its practices after Congress questioned how the platforms protect users from harmful information. Misinformation is a complex problem to tackle, and Twitter hasn’t been sitting idle. The social network has tried hard interventions, like banning accounts, and soft interventions, like marking tweets with warnings and preventing them from being shared; it even introduced a feature that pre-reviewed tweets for hateful content. But as researchers have found, each strategy comes with its own set of drawbacks.
“There are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors,” Coleman noted in the blog. “We know this might be messy and have problems at times, but we believe this is a model worth trying.”