A guide to Section 230, the law that made the internet the Wild West

The law from 1996 is at the heart of a pair of important Supreme Court cases. Here's a brief explainer.
Section 230 essentially holds that a social media platform isn't liable for the content people post there.


There are few laws more fundamental to the way the internet works than Section 230. Just 26 words long, it created the framework for much of the modern web. But now the Supreme Court has taken up two cases that challenge its basic premise: Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh. If you want to know what all the hubbub is about, here’s what the law says, and what people think about it.

What is Section 230? 

Section 230 of the Communications Decency Act was passed in 1996. That's before Google, Facebook, and many of today's other internet giants were founded. Instead, it was designed to deal with an internet filled with message boards and rudimentary search engines. 

Section 230 has two key provisions: (c)(1), which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider;” and (c)(2), which provides protection from liability for “any action voluntarily taken in good faith to restrict access to or availability of” objectionable content, whether or not that content is constitutionally protected by the First Amendment. 

In general, the courts in the US have taken a broad approach to interpreting Section 230. They have largely ruled that search engines, large social media services, e-commerce sites, and even small blogs that host comment sections aren't liable for content that users post—though there are exceptions for things like illegal content and content that violates intellectual property law. The courts have also ruled that platforms have broad rights to remove whatever content they like, which is how former President Donald Trump got himself banned from several major platforms.

This law has very important ramifications for how websites have been able to operate over the past 26 years. Sophia Cope, a senior staff attorney at the Electronic Frontier Foundation, explains that “Section 230 is considered both an immunity from suit as well as from liability.” (The EFF has filed amicus briefs in support of Section 230 in both recent Supreme Court cases, and has long argued that it is an essential law for maintaining free speech rights on the internet.)

The law means that websites and social networks are not only off the hook for civil liability over harm a plaintiff suffers from user-generated content they host, but can also get out of a lawsuit early, without having to defend against the specifics of the claim. 

As Cope explains, without Section 230, “Platforms would have to defend themselves all the way to the very end of a case that might take several years… and then there could be multiple appeals that cost a lot of money and take a lot of time.” 

Why do tech companies like Section 230? 

Section 230 is often described as a “liability shield,” and really, that’s why tech companies like it. 

In other countries, tech companies have far stricter content-removal obligations than they presently do in the US. In Germany, for example, social media companies have to promptly remove illegal content (which can include offenses such as insulting a public official) or face fines of up to €50 million (roughly $53 million). 

Not only can they be forced to pay those fines, but they also have to employ lawyers and lobbyists to fight the cases and the laws behind them. That's why they have fought so hard against the latest spate of European Union laws, like the Digital Services Act and the Digital Markets Act, which are expressly designed to rein in American tech companies. 

How do politicians feel about Section 230?

As much as tech companies enjoy the protection of Section 230, politicians from across the political spectrum take issue with it. 

As Cope explains it, Republican politicians over the past several years have tended to feel that, under Section 230, "platforms are taking down too much content—particularly too much conservative or Republican content." Former President Trump, for example, has called for the law to be abolished.

“But on the other hand,” says Cope, “You have the Democrats, or more the liberals, who actually think that not enough content is being taken down. They complain about a lot of bad content, like hate speech, which is protected under our First Amendment.” 

In a Wall Street Journal op-ed last month, President Joe Biden called for “bipartisan action from Congress to hold Big Tech accountable,” including amending Section 230 to make the companies more liable for the content they host.

What else is there to know about Section 230?

For better or worse, change could be on the horizon. “It seems like there’s consensus in Congress that after 25 years of Section 230, they want to do something,” says Cope, “but it’s not a hundred percent clear what it is they would do.” 

First, though, the Supreme Court has to weigh in. Both Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh were brought under the federal Anti-Terrorism Act, and both hinge on how the court interprets Section 230. In reporting on the first of those cases yesterday, The New York Times said that the court appears leery of making big changes to the law. 

It’s the first time the highest court has considered Section 230, and whatever it decides will have serious implications for the future of the internet around the world. 

 
