The First Amendment reads: "Congress shall make no law ... abridging the freedom of speech." It is not a complicated sentence. The founders did not write "Congress shall make no law unless a private company does it instead." The principle is the same.
I know the counter-argument. Platforms are private companies. The First Amendment only restricts government action. And technically that is correct. But the function of the First Amendment was to prevent the censorship of political speech and the suppression of dissent. When a platform with 3 billion users can effectively remove a political viewpoint from public discourse, the constitutional architecture is being circumvented even if its letter is not violated.
This is not a hypothetical. In 2020, Twitter and Facebook suppressed the New York Post story about Hunter Biden's laptop. The story turned out to be accurate. Twitter's own then-CEO, Jack Dorsey, later acknowledged the suppression as a mistake. The New York Post is one of the oldest newspapers in America. In the run-up to a presidential election, its story was suppressed by the decisions of 28-year-old content moderation employees at a tech company.
That is not a private company making a business decision. That is the exercise of political power without democratic accountability. The solution is not to break up the companies or regulate their editorial decisions. The solution is to require them to host all legal speech and let the market — readers, advertisers, other users — decide what rises and what falls.
I want to start with the one thing I actually agree with: the Hunter Biden laptop suppression was wrong. The New York Post story should not have been suppressed. Jack Dorsey was right to acknowledge it as a mistake. That is a legitimate grievance.
But the legal and philosophical case for requiring platforms to host all legal speech has serious problems that the grievance does not resolve.
First: the First Amendment protects private actors too. Newspapers have the right not to publish letters they disagree with. Bookstores have the right not to stock books they find objectionable. A private company's editorial discretion is itself a form of speech — the speech of choosing what to amplify and what not to. Requiring platforms to host all legal speech is compelled speech, which the Supreme Court has held is a First Amendment violation in Wooley v. Maynard (1977) and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995).
Second: what counts as legal speech is not a simple category. Defamation is illegal but the line is contested. Incitement is illegal but the line is contested. Obscenity is illegal but the line is contested. If platforms must host all legal speech, they need to make real-time legal determinations on billions of pieces of content. That is either impossible or requires an army of lawyers per post. The actual effect would be to make moderation so expensive that only the largest platforms could operate — a consolidation that benefits the very companies my opponent is criticizing.
Third: platforms that host everything — 4chan, Parler, Gab — exist and are legal. They are also unusable for normal people because they are drowning in spam, harassment, and extremist content. That is what all legal speech looks like at scale.
The compelled speech argument. Let me be direct about it: the Supreme Court in Hurley and Wooley was dealing with very different contexts — a parade organizer forced to include a group, a state forcing people to display a license plate motto. The question of whether those precedents apply to a utility-scale communication platform with 3 billion users is genuinely unsettled. Justice Thomas wrote a concurrence in Biden v. Knight First Amendment Institute (2021) specifically inviting the Court to reconsider how common carrier doctrines apply to social media.
The common carrier argument is the one I should have led with. AT&T cannot refuse to connect your call because it disagrees with your politics. A railroad cannot refuse to transport your cargo. These are utility functions on which the public depends for communication and commerce, and the law has long recognized that businesses exercising that kind of public function can be regulated as common carriers. A platform that hosts the bulk of the nation's political discourse is not a niche bookstore. It is the public square.
The Section 230 argument is the most serious one you have made and I want to engage with it honestly. You are right that the editorial discretion/230 protection linkage is real and that removing one may require changing the other.
But I want to flip the framing: if the price of requiring platforms to host all legal speech is reforming Section 230 so that platforms bear more liability for the content they do host, that may be a feature rather than a bug. Section 230 was written in 1996 for a very different internet. The liability shield it provides has enabled platforms to host illegal content — defamation, harassment, illegal drug sales — with effectively no consequence. A regime in which platforms have both hosting obligations and actual liability for content they choose to amplify would create a different set of incentives.
I am not claiming this transition would be clean or that the litigation environment would be manageable immediately. I am claiming that the current regime — complete discretion plus complete immunity — is also not producing good outcomes.
You just said Section 230 reform is a "feature rather than a bug" of your proposal. That is a significant escalation and I want to sit with it.
If your proposal is: require platforms to host all legal speech AND reform 230 to create liability for amplified content — you have just described a regime that would either bankrupt every platform or create such conservative moderation that nothing controversial ever gets amplified. Companies do not take on massive liability exposure voluntarily. The rational response to liability for amplified content is to stop amplifying anything that could be contested.
The irony: a proposal framed as protecting political speech would produce platforms that are so risk-averse about amplification liability that political speech is effectively deprioritized. You get more speech in the hosting sense and less speech in the visibility sense.
The conservative moderation risk is real and I will not pretend it is not. If platforms are liable for what they amplify, they will amplify less. That may in fact produce a less outrage-driven information environment, which is not obviously bad.
Here is the distinction I want to draw: there is a difference between liability for amplifying content that turns out to be defamatory, and liability for hosting content at all. The regime I am describing would give platforms legal exposure for algorithmic choices that actively promote harmful content, not for the existence of that content on the platform. That is a narrower liability than what you are describing.
But I want to acknowledge that we have moved a long way from the original framing. The debate was about hosting obligations. We are now arguing about the interaction of hosting obligations with liability reform and algorithmic amplification. Those are related but they are not the same question.
Agreed that we have traveled pretty far from the original question. Let me restate where I am.
I do not think platforms should be legally required to host all legal speech. Private companies have editorial rights. Compelled hosting is compelled speech. The common carrier argument fails because these are editorial systems, not neutral conduits.
I DO think: Section 230 needs reform to create better incentives around what platforms amplify algorithmically. Platforms that are effectively monopolies on political speech have public interest obligations that small platforms do not. Transparency requirements for moderation decisions — notice, appeals, consistency — are legitimate and achievable without hosting mandates.
What I am not is a defender of the current regime. The current regime is also bad. But "bad in a different way" is not the same as "the alternative you are proposing is better."
Your position in round 8 — no hosting mandates, but transparency requirements and algorithmic amplification liability — is more reasonable than most of what I hear from the other side of this debate. I want to push on it anyway.
Transparency requirements have a poor track record as a regulatory mechanism. Facebook has published transparency reports since 2013. Twitter published detailed moderation data. None of it produced meaningful accountability because there is no enforcement mechanism and no standard against which the data can be evaluated.
The deeper problem: transparency tells you what happened after the fact. If a political viewpoint is systematically downranked by an algorithm, a transparency report showing that downranking does not restore the speech. The viewpoint has already failed to reach its audience during the period when it mattered.
I am not claiming hosting mandates are the clean solution. I am claiming that transparency-only regulation is how we do nothing while telling ourselves we did something. The laptop story was suppressed in October 2020. If a transparency report had been published in April 2021 confirming the suppression, it would have been historically interesting and politically meaningless. Timing matters. The speech harm is not reversible after the fact.
The transparency critique lands if we are talking about the kind of voluntary reports platforms currently publish, which are indeed mostly useless. That is not what I am proposing.
What I mean by transparency mandates: specific, auditable, legally enforceable disclosure of moderation criteria, applied consistently across political viewpoints. Not a PDF with aggregate numbers. A regime where a platform must document the rule it applied, demonstrate it was applied to comparable content on both sides of the political spectrum, and provide a meaningful appeal process with defined timelines. Violations trigger actual legal liability.
That is not doing nothing. It is a regime under which the laptop suppression would have been a legally actionable violation if a comparable story damaging to the other side had been allowed to stay up during the same period.
Your point about timing is fair — the harm is done before the report comes out. The answer is faster enforcement mechanisms, not hosting mandates. A platform that knows it is subject to real-time auditing for consistent application of its stated rules has very different incentives than one subject to a voluntary annual report. Speed of enforcement is a design choice. It is not a structural limitation of the transparency approach that you cannot fix without moving to hosting mandates.
The principle I want to defend: when a private company exercises public power at scale — when decisions made in a conference room in Menlo Park determine what 3 billion people see and do not see about the world — those companies have obligations to the public that go beyond what we normally require of private businesses.
Whether the legal mechanism is common carrier regulation, must-carry obligations, or a reformed Section 230 with hosting and amplification rules, the principle is the same: speech that is legal should be accessible. The public square should not have a private owner who can decide who gets to speak.
My opponent gave me the strongest possible counterarguments and they are real. The compelled speech concern is real. The 230 interaction is complicated. Conservative moderation risk is real. I do not have clean answers to all of them.
But the alternative — platform owners with unchecked power to suppress legal speech on political grounds, as happened with the laptop story — is worse. The discomfort of reforming how these platforms operate is less than the danger of leaving them unreformed.
Private companies have the right to make editorial decisions. That right is itself protected by the First Amendment. Requiring platforms to host all legal speech is constitutionally problematic as compelled speech, operationally unworkable at scale, and likely to produce a consolidation that leaves us with fewer and larger platforms rather than more speech.
I agree with my opponent that the status quo is unacceptable. The laptop story was a real failure. The lack of transparency, consistency, and accountability in platform moderation is a real problem. The solution is transparency mandates, appeals processes, and Section 230 reform that creates better incentives around algorithmic amplification. Not hosting mandates.
The thing I want readers to take away: this is not a debate between people who care about free speech and people who do not. It is a debate about which legal mechanism best protects the ability of people to communicate across the political spectrum. We disagree on the mechanism. We agree that the current situation is a failure.