Twitter is launching what it calls “a community-based approach to misinformation.”
The goal, as explained in a blog post by Twitter’s Vice President of Product Keith Coleman, is to expand beyond the labels that the company already applies to controversial or potentially misleading tweets, which he suggested are limited to “circumstances where something breaks our rules or receives widespread public attention.”
Coleman wrote that the Birdwatch approach will “broaden the range of voices that are part of tackling this problem.” The idea is to bring a wider range of perspectives to bear and move beyond the simple binary of “Is this tweet true or not?” It may also take some of the heat off Twitter for individual content moderation decisions.
Users can sign up on the Birdwatch site to flag tweets they find misleading, add context via notes and rate other contributors’ notes as helpful or not. These notes will only be visible on the Birdwatch site for now, but it sounds like the company’s goal is to incorporate them into the main Twitter experience.
“We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable,” Coleman said. “Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”
Given the potential for plenty of argument and back-and-forth on contentious tweets, it remains to be seen how Twitter will present these notes in a way that isn’t confusing or overwhelming, or how it can avoid weighing in on some of these arguments. The company said Birdwatch will rank content using algorithmic “reputation and consensus systems,” with the code shared publicly. (All notes contributed to Birdwatch will also be available for download.) You can read more about the initial ranking system here.
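Twitter’s post doesn’t spell out the ranking math, but the basic mechanic it describes — surfacing notes that a broad set of contributors rated helpful — can be sketched with a toy scoring function. Everything below (the `Note` structure, the smoothing prior, the scores) is a hypothetical illustration, not Birdwatch’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Note:
    """A contributor note attached to a tweet (hypothetical structure)."""
    note_id: str
    text: str
    helpful: int = 0      # "helpful" ratings from other contributors
    not_helpful: int = 0  # "not helpful" ratings

def helpfulness_score(note: Note, prior: float = 1.0) -> float:
    """Smoothed helpful-rating ratio: the Laplace-style prior keeps a
    note with one or two ratings from outranking well-vetted notes."""
    return (note.helpful + prior) / (note.helpful + note.not_helpful + 2 * prior)

def rank_notes(notes: list[Note]) -> list[Note]:
    """Order notes so the most broadly endorsed context surfaces first."""
    return sorted(notes, key=helpfulness_score, reverse=True)

notes = [
    Note("a", "Adds a primary source.", helpful=40, not_helpful=5),
    Note("b", "Disputes the claim.", helpful=3, not_helpful=0),
    Note("c", "Off-topic.", helpful=1, not_helpful=9),
]
for n in rank_notes(notes):
    print(n.note_id, round(helpfulness_score(n), 2))
```

A scheme this simple would still be vulnerable to the manipulation and majority-dominance problems Coleman flags below, which is presumably why the company is pairing ratings with contributor reputation.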
“We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors,” Coleman said. “We’ll be focused on these things throughout the pilot.”