
The social media landscape looks a lot different today than it did just one week ago. After the deadly riot at the U.S. Capitol building on January 6, most of the major social media companies took action against President Donald Trump and some of his supporters. Those moves have had a ripple effect that reaches the far corners of the internet and will likely continue for years to come. Here’s an updating overview of what’s happening—and how we got here.

What kickstarted all of this?

On the night of Wednesday, January 6, Twitter suspended Trump’s account for 12 hours in response to several tweets about the day’s deadly events. In announcing the temporary suspension, Twitter warned that “Any future violations of the Twitter Rules, including our Civic Integrity or Violent Threats policies, will result in permanent suspension” of his account. The next day, Trump reemerged on the site for two more tweets, at which point Twitter dished out the permanent suspension.

On January 7, Mark Zuckerberg published a statement on his Facebook page announcing a ban on Trump for the remainder of his presidency. From there, other services fell in sequence, leaving the president—and some of his most vocal supporters—without a direct presence on the largest social media platforms. Those platforms now include YouTube, which deleted a video and blocked Trump from uploading new content until next week, and the game-streaming site Twitch, which permanently disabled his account.

What gives these platforms the right to ban the president?

The rules used to ban Trump aren’t new. In fact, some claim he’s been running afoul of well-established terms of service for the entirety of his term as president. In the case of Twitter’s ban, the company specifically cites two tweets made after Trump came back from his 12-hour time-out, including one about how he wouldn’t be attending Joe Biden’s inauguration on January 20th.

Twitter says those messages ran afoul of its Glorification of Violence policy. The company’s blog post outlines a number of factors that played into the ban, including the possibility that his tweet about skipping the inauguration could signal to bad actors that they could target the event without risk of harming Trump. Beyond that, Twitter claims people were using the platform to plan “armed protests” around the country on January 17th.

From Facebook (and the Facebook-owned Instagram), Zuckerberg similarly suggests that the sitting president’s reaction to the riots was unproductive and possibly dangerous. “His decision to use his platform to condone rather than condemn the actions of his supporters at the Capitol building has rightly disturbed people in the US and around the world,” Zuckerberg wrote in his public post. “We removed these statements yesterday because we judged that their effect—and likely their intent—would be to provoke further violence.”

In both cases, the statements reference long-standing pieces of the terms of service and user agreements, which allow the companies to use their judgment when it comes to moderating content and suspending users.

What is “Section 230” and what does it have to do with all of this?

In super-simplistic terms, Section 230 of the Communications Decency Act gives websites the ability to moderate objectionable, illegal, or dangerous user-generated content without facing legal liability for that content. (Here’s a much more in-depth explanation from The Verge.)

So, if you comment on this article and say mean things about me, and those thoughts don’t contribute to the conversation, then we’re allowed to delete that comment. And if you say something truly vile and dangerous in the comments, this site generally isn’t liable for it. This isn’t specific to social media platforms—it applies across the internet.

The statute has come under heavy political criticism, particularly after Twitter began labeling Donald Trump’s tweets about election fraud as incomplete or potentially misleading. Politicians and pundits called for Section 230's revocation—it even figured into Mitch McConnell’s successful effort to block the $2,000 stimulus payments proposed as part of the coronavirus relief package.

It’s not just one political party that hopes to reform—if not fully revoke—Section 230. Some of the statute’s most vocal critics before last year came from the Democratic side of the aisle, where lawmakers argued that social media platforms should bear more liability for harmful content on their sites as an incentive to police things like hate speech more proactively.

Now, with Democrats in control of Congress and the presidency, it’s unclear what that means for Section 230 going forward.

Can’t Trump just make other accounts?

If Trump wanted to get into a game of whack-a-mole with the various platforms, he could. In fact, soon after his ban, he tried to use the official presidential Twitter account to continue sending out his messages. That ran afoul of Twitter’s policy against ban evasion, and the tweet was quickly deleted. In the case of a permanent suspension, “Twitter reserves the right to also permanently suspend any other account we believe the same account holder or entity may be operating in violation of our earlier suspension, regardless of when the other account was created.”

Facebook says that Trump may be able to get his account back once the transfer of power has happened in an orderly fashion. Making a new page would be difficult for him, because pages used to promote political messages and advertisements require valid identification under Facebook’s rules.

Can’t Trump just go to another social media service, like Parler?

Parler promised a truly open social media platform with an emphasis on free speech, a “digital town square” where people could, to a large extent, say whatever they want. That also means moderation practices that are much more relaxed than those found on Twitter and Facebook. Over the weekend, both Apple and Google removed the app from their respective app stores, which prevented new users from downloading it. Then, at 11:59 p.m. on Sunday night, Amazon Web Services revoked Parler’s hosting, leaving even existing app users, and anyone willing to use a browser, unable to reach the service.

According to Amazon, Parler was banned because of inadequate content moderation practices, which it feared would allow users to spread misinformation and plan violent or destructive events. Parler has filed a lawsuit, and Amazon has responded with a rather scathing document.

At present, just about all of Parler’s vendors have dropped the service, leaving the company in a difficult position. In an interview with Glenn Beck on Monday morning, a Parler representative claimed that several vendors are looking to partner with Parler for various web services, but couldn’t confirm if or when the site will actually come back online. Reports claim that Parler will soon move to a domain registrar called Epik, which also provides services to controversial platforms like Gab.

So, not only can Trump not use Parler—right now, no one can.

Will that happen to other services?

A number of tech companies have revoked their support for Trump in ways that are far less visible than the social media bans. Stripe, for instance, will no longer process payments made to the Trump campaign through its official website.

As for other social media apps, their relationship with Apple and Google’s app stores has gotten more complicated. As Slate points out, platforms like YouTube and Facebook have provided groups with space to plan violent events and radicalize people like the Christchurch shooter, yet they haven’t been removed from the app stores.

Even the American Civil Liberties Union has expressed concern about the future of this kind of discourse. In a statement to Bloomberg, a representative from the ACLU said, “It should concern everyone when companies like Facebook and Twitter wield the unchecked power to remove people from platforms that have become indispensable for the speech of billions—especially when political realities make those decisions easier.”

What happens now?

It seems safe to say that no one has a definitive answer. Rumors suggest that Trump may use his last bit of time in office to try to punish the big tech platforms on his way out the door.

The big tech CEOs have made several appearances before Congress in the past year to talk about their moderation practices, discuss antitrust accusations, and serve as a backdrop for politicians to create sound bites of themselves grandstanding. The events of the past week will almost certainly amplify that governmental pressure across the board.