
Section 230

“In order to maintain a tolerant society, the society must be intolerant of intolerance.” — Karl Popper

In 1996, the United States Congress passed the Communications Decency Act. At the time, there was concern among web site creators and advocates for the open web that the law would go too far in regulating content on the internet. And a year later, the Supreme Court at least partially agreed, striking down some provisions of the law around “indecent” content.

The full act is part of a telecommunications bill, an update to communications law that predates computers, and it comprises hundreds of sections. Section 230 covers whether platform providers are responsible for content posted on their platforms.

The text of the act was written for the likes of dial-up services AOL and Prodigy, yet it remains relevant for today’s web-based platforms:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Over twenty years later, Section 230 of the Communications Decency Act is still one of the only laws we have for guiding how social networks should curate posts. Are they liable for offensive or harmful content? Are they a publisher (like a newspaper) that has editorial oversight of content?

This played out on a large scale with Trump, who had an enormous following on Twitter and often pushed right up to the line drawn by community guidelines, and sometimes over it.

During the protests in Minneapolis in 2020, one of Trump’s tweets was flagged by Twitter, requiring an extra click to view it. From a summary in The Week magazine:

You can still read President Trump’s early-Friday tweet about sending the National Guard into Minneapolis if you go to his Twitter feed, but you now have to take an extra step to read the follow-up tweet threatening: “When the looting starts, the shooting starts.”

Much like the warning shown before viewing a sensitive photo or video, Trump’s tweet was flagged and replaced with a notice that it violated Twitter’s rules about glorifying violence.

The conflict between Trump and Twitter’s content policy only escalated in the years after that. Twitter attempted to fact-check tweets, working with “trusted partners”. Trump tweets that violated the guidelines would remain up but with a disclaimer, and some tweets were essentially locked so that they couldn’t be retweeted or liked, limiting their spread.

Trump signed an executive order pushing back against Twitter, but it was largely ineffective. Reporting from Ars Technica:

The centerpiece of the order is an effort to strip big technology companies of protection under Section 230, a federal law that immunizes websites against liability for user-submitted content. That would be a big deal if Trump actually had the power to rewrite the law. But he doesn’t. Rather, his plan relies on action by the Federal Communications Commission, an independent agency that has shown no inclination to help.

There was also a question of whether a Trump tweet essentially received more coverage because Twitter took action to curate it, as the curation itself became news, further repeating Trump’s claims. In other words, did the process of trying to fix the problem actually make it worse? But the context for the news matters.

While it might be true that more people saw Trump’s tweet because of Twitter’s actions, the context in which they saw it (screenshots on CNN, The New York Times, or other web sites) is completely different. Republishing the tweet puts it in a context of essentially fact-checking it, whereas if the tweet were simply retweeted or shared on Facebook to millions of followers, it could do much more damage. One really important feature of Twitter’s “hide this tweet” curation is that it prevents the tweet from being liked or retweeted.


Four weeks before the 2020 election, tech company CEOs were again called to testify before Congress. Democrats thought tech companies weren’t doing enough to moderate misinformation on their platforms, with Section 230 essentially letting them off the hook. Republicans thought tech companies were biased against conservatives.

From Twitter flagging Trump’s tweets to Facebook hiring thousands of content moderators, social networks were finally realizing that whether or not new laws were written to replace Section 230, they had to do more. Rolling the dice by letting algorithms surface content with little human oversight wasn’t working.

Generally “more speech” is good, but the Republicans in the Senate hearing were talking about censorship when the debate should have been about whether anyone has a right to amplify news stories, especially a week before an election. At the time, Twitter had just stopped anyone from linking to a sensational New York Post story about Joe Biden’s son Hunter Biden. Twitter may have overreached, but they tried to err on the side of preventing fake news from going viral, which is a worthwhile goal.

Senator Ted Cruz was frustrated with Twitter CEO Jack Dorsey:

As you know, I have long been concerned about Twitter’s pattern of censoring and silencing individual Americans with whom Twitter disagrees. But two weeks ago, Twitter, and to a lesser extent Facebook, crossed a threshold that is fundamental in our country. Two weeks ago, Twitter made the unilateral decision to censor the New York Post.

Cruz acted as if Twitter controls what the New York Post can publish. The Post, of course, has its own newspaper and web site and can publish whatever it wants; Twitter only decided not to amplify the story on its own platform.

More transparency and accountability are good. Even better would be to get away from having huge platforms to begin with.


All of this culminated in the January 6th insurrection at the US Capitol. Misinformation about the 2020 election results, combined with private groups for organizing events in Washington DC, led to physical violence and came dangerously close to upsetting the peaceful transfer of power in our democracy.

Twitter and Facebook suspended Trump’s accounts, and in the following months attention turned again to Facebook. Frances Haugen, formerly of Facebook, leaked internal documents providing new insight into how the company’s ad-based business model created many of its problems with moderation.

In a Senate hearing, Haugen was a compelling witness, showing deep knowledge of the issues and of potential changes to Section 230. From The New York Times:

Ms. Haugen suggested a change to Section 230 of the Communications Decency Act, the law that protects platforms from being held legally liable for content posted by their users. Specifically, she said she would recommend exempting platform decisions about algorithms from Section 230 protections – so that Facebook and other apps could be sued for their choices about how to rank content in users’ feeds.

We don’t know what the final form of a law adjusting Section 230 will look like, but clearly the days of big social networks being insulated from legal action are numbered. When they lose that immunity, they will have to dial back the algorithmic parts of their platforms that run uncontrolled with too little oversight.
