This is part two of a five-part story. To start from the beginning, go here.
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." —Communications Decency Act of 1996, Section 230(c)(1)
What does that mean? In short, web platforms aren't legally liable for what their users post. If content posted by a Facebook user is defamatory, obscene or harmful to another person, the user can be sued—but Facebook cannot.
That rule is very different from the standards for other publishers. If a newspaper, for example, publishes a letter to the editor that contains a false and defamatory allegation against the mayor, the mayor can sue the newspaper for damages. But if a commenter makes the same false and defamatory claim in a Twitter thread, the mayor can't sue Twitter.
That gives social media platforms far broader legal immunity than the First Amendment alone grants to any other kind of publisher.
There are exceptions: Section 230 does not shield platforms from federal criminal law, so they can still be liable when user content violates it. And while Section 230 is often blamed for hate speech on the internet, that charge misses the mark: Hate speech, like most expressions of opinion, is already protected by the First Amendment—with or without Section 230.
Nothing in Section 230 prevents Facebook and others from exercising judgment and excluding content—it just frees them from liability should they choose not to. Plenty of platforms operate under the protection of Section 230 and use it responsibly: Pinterest is a good example. In fact, such responsible use was what Wyden envisioned.