Twitter’s new policy aims to protect private individuals in photos

The change is focused on curbing abuse through media shared without consent.

Joshua Hoehne / Unsplash

In response to “growing concerns” about how media is being shared on Twitter, the company said Tuesday that it was expanding its privacy protections to include images and videos of people posted without their consent. This means that Twitter will now consider removing media featuring private individuals, defined as people who do not have a significant public identity or role. Users who post such content can now face repercussions, including having the tweet in question hidden or limited, or being required to remove it before they can tweet again. The change, Twitter explained in a blog post, acknowledges that images do not have to be explicitly abusive to cause distress when shared on its platform. 

“Sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm,” the Twitter Safety team wrote in the blog. “The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities.” 

The policy update, which is already in effect, allows people to report media shared without consent on the platform just as they would any other tweet that violates the Twitter Rules. The report is then reviewed by Twitter employees, who decide whether to remove the media from the site and what consequences, if any, the account holder who tweeted it should face. 

While Twitter’s privacy policy broadly says it does not allow “media of private individuals without the permission of the person(s) depicted,” there are a number of caveats to that rule. For example, the blog points to situations in which a private individual’s likeness might be shared due to their involvement in a newsworthy event. In cases like that, Twitter says it will take into consideration the context and the availability of the media from other sources (like television news) when determining whether to remove it. When it comes to public figures, like celebrities or politicians, the policy does not apply unless the media includes private nude images or its purpose is to “harass, intimidate, or use fear to silence them.” 

This move is one in a series of measures Twitter has taken this year to address abuse and misinformation on its platform as it faced inquiries from members of Congress and outside experts over its role in perpetuating online harm. It also comes after news of a significant shake-up in Twitter leadership, with co-founder and CEO Jack Dorsey sharing Monday that he is stepping down and that Parag Agrawal, previously Twitter’s chief technology officer, will be taking over as CEO.

In a wide-ranging interview with MIT Tech Review last year, Agrawal hinted at his priorities for the platform, reiterating Twitter’s goal of promoting “healthy conversations.” According to Agrawal, that includes “trying to avoid specific harm that misleading information can cause.” When asked how to balance free speech with the need for moderation, Agrawal said Twitter’s role is “not to be bound by the First Amendment,” but to structure the site in a way that “lead[s] to a healthier public conversation.”