The impact on the public sphere has been substantial, to say the least. By removing so much responsibility, Section 230 incentivized a particular type of business plan, one based not on the information uniquely available through a given service, but on the paid mediation of access and influence. Thus we ended up with a deceptively named "advertising" business model, and an entire society locked in a 24/7 competition for attention: a polarized social media ecosystem, and recommendation algorithms that mediate content and optimize for engagement. We have learned that humans are most engaged, at least from an algorithmic perspective, by heightened emotions related to the fight-or-flight response and other intense reactions. By enabling the privatization of the public square, Section 230 has inadvertently distorted debate among citizens who are otherwise equal before the law. Perverse incentives promote negative speech, which effectively drowns out thoughtful speech.

And then there is the economic imbalance. Internet platforms that rely on Section 230 collect personal data for their own business purposes without adequate compensation. Even when data is protected by copyright or other law, Section 230 often effectively shifts the burden onto the infringed party, who must issue takedown notices. This reversal of the sequence of responsibility is comparable to the difference between opt-out and opt-in in privacy. It sounds like a technicality, but it is actually a profound difference that causes a great deal of damage. For example, workers in information-related industries such as local news have seen a marked decline in economic success and status. Section 230 makes a world of data dignity functionally impossible.

To date, content moderation has too often been driven by the pursuit of attention and engagement, with regularly stated corporate terms of service ignored. Rules are bent to maximize engagement through inflammation, which can mean harm to personal and societal well-being. The excuse is that this is not censorship, but is it really? Arbitrary rules, doxing practices, and cancel culture have blurred the line between earnest, well-meaning moderation and censorship. At the same time, "free speech" that shelters inflammatory bad actors encourages mob rule. All of this happens under the liability shield of Section 230, which effectively gives tech companies carte blanche for a shortsighted version of self-serving behavior. The hatred of these companies, which found a way to be more than carriers and yet not publishers, now seems to be shared by everyone in America.

Trading a known for an unknown is always scary, especially for those who have the most to lose. Since at least some of Section 230's network effects were foreseeable from its inception, it should have contained a sunset clause. It didn't. Rather than focusing exclusively on the disruption that axing those 26 words would cause, it is useful to consider the potential positive effects. When we imagine the world after 230, we discover something surprising: a world of hope and renewal worth inhabiting.

In a sense, it is already happening. Some companies are making their own forays into the post-230 future. For example, YouTube is actively creating alternatives to advertising revenue, and top creators are getting more options for earning. Taken together, these voluntary moves suggest a different, more publisher-like self-concept; YouTube looks ready for the post-230 era. (A company like X, by contrast, which leans heavily on 230, is destroying its value with astonishing speed.) Moreover, Section 230 has always had exceptions, carved out in certain cases for protective purposes. Partly as a result, dating websites have the option of charging fees instead of relying on a 230-style business model. The existence of these exceptions suggests that more examples will appear in a post-230 world.