Tech by Blaze Media

© 2024 Blaze Media LLC. All rights reserved.

Happy Fourth! Read the Supreme Court's most recent defense of the First Amendment

Does the government have a right to dictate what you see online?

Among the various unhinged responses to the Supreme Court’s opinions from the spring 2024 term, some of the most thoroughly deranged relate to the decision in NetChoice v. Paxton earlier this week. The hysteria around NetChoice — including, for example, the attorney general of Florida claiming as a victory what was clearly a defeat, or the New York Times running an op-ed that described the First Amendment as out of control — belies the fact that NetChoice is a simple case.

It concerns whether the government has a right to dictate what you see online.

The true target of any censorship scheme on social media is rarely a corporation or an industry and more often you, the ordinary internet user.

Adding color, NetChoice is about the legislatures of Florida and Texas thinking that they have the power to dictate to social media companies what political opinions can and cannot be expressed on their websites. These states passed laws barring social media companies from removing content except in a viewpoint-neutral manner, which would require the companies to carry “conservative” content that, at the time, was widely regarded as being suppressed by social media companies for political reasons. The social media companies sued, saying this violated their free speech rights.

The content-based powers of the kind Texas and Florida sought are routinely exercised by governments outside the United States but almost never exercised here, thanks to the First Amendment. The First Amendment was deliberately designed to get government out of the business of policing political speech, initially by abolishing the common law of seditious libel and later by extending, by analogy, to newer schemes — whether criminal syndicalism statutes or content moderation statutes — that seek to achieve similar ends by different means.

The Supreme Court has long been hostile to laws that offend the First Amendment, and its opinion in NetChoice v. Paxton is no exception; the court, 9-0, vacated the lower court’s ruling and remanded it with a harsh warning to the lower courts: When moderating content, “social media platforms are making expressive choices … because that is true, they receive First Amendment protection.”

That Justice Kagan wrote those words makes it all the stranger that backing the defendants in NetChoice is a group of very strange bedfellows: hard-charging Southern conservative populist attorneys general on the one hand and, on the other, Manhattan liberal law professors who wouldn’t be caught dead attending the same cocktail party as those attorneys general. What such lawyers share, at least as viewed by those of us who are more commercially minded, is that they tend to be long on opinions and short on experience, and their opinions on this matter are lengthy indeed. Even a scintilla of experience with state censorship would reveal that the true target of any censorship scheme on social media is rarely a corporation or an industry and more often you, the ordinary internet user.

And many long-form rationalizations have emanated from the left-leaning supporters of Paxton et al. in this case with a view to obscuring this simple fact. The first of these was a completely bizarre and (judging from SCOTUS’ disposal of the case) legally 100% wrong amicus brief submitted jointly by a number of nationally known, left-leaning law professors, including Larry Lessig, and the American Economic Liberties Project, who argued that NetChoice’s position on this matter — that states have no business policing speech — somehow threatened non-discrimination laws and would “[place] social media beyond the reach of the States’ police power.”

More recently, on Tuesday, July 2, the New York Times published another legal academic op-ed, this time titled, “The First Amendment Is Out of Control,” by Columbia law professor Timothy Wu, who not coincidentally was also one of the co-authors of the aforementioned wacky and wrong amicus brief. Wu rails against the decision, describing the federal judiciary as having “lost the plot” with the decision in NetChoice, “[transforming] a constitutional provision meant to protect unpopular opinion into an all-purpose tool of nullification that now mostly protects corporate interests” and “blithely assuming that [content moderation] decisions are equivalent to the expressive decisions made by human editors at newspapers.”

Which, of course, they are — as anyone who has any experience at the coalface of social media, rather than simply talking about it, would doubtless be aware. Just log into a social media website’s content moderator interface, and you’ll be able to see it plainly; flagged content gets served to an editor, who is then in a position to decide whether the content stays up or disappears. The moderator’s array of buttons, ranging from warnings to mutes, deletions, and bans, is the means by which the moderator metes out editorial justice in accordance with corporate policy and his own discretion. The decision he makes is functionally no different from that of an editor spiking a piece by pressing “send” on an email or a newspaper comment editor deleting a comment below an article in a comment section. It is a human editorial decision carried out by electronic means.

Nor does a content moderation law such as Texas’ police “Big Tech” — because content moderators don’t police “Big Tech’s” speech. So-called “Big Tech” platforms, as a general rule, say very little using their own platforms; where they do, it is done carefully and via, for example, government relations accounts or press releases. This is done for both practical and legal reasons: As a commercial matter, having your users create free content and page views scales considerably better than paying people to do it for you. Legally, thanks to Section 230 of the Communications Decency Act, the law refuses to impute liability to hosts of online services for the speech of their users, subject to certain limited exceptions, so the less you say, the less likely it is that you will get sued. This rule means that a company like, for example, the New York Times, will not be liable for what users of its comment section say, just as a company like Twitter/X will not be liable for what its users tweet.

Content moderation laws, then, do not police companies. They police the people who create content and who seek both to impart it and to receive it from others, with a view to steering the information those users see in the direction the law’s writers prefer.

Content moderation laws are tools of social control. Necessarily, they call balls and strikes about viewpoints, choosing which views are disfavored and which ones are not. Otherwise, why have a law at all? That’s how they’re used abroad, and Texas’ social media law — which called for “neutrality” and mandated hosting all viewpoints — was no exception. This law was enacted before Elon Musk bought Twitter and when content moderation on the major websites skewed left. The law required social media platforms not to “censor a user … or a user’s ability to receive the expression of another … based on: the viewpoint of the user or another person [or] the viewpoint represented in the user’s expression or another person’s expression.”

Under this law, a Jewish-themed social media platform would thus be unable to exclude a user from posting Nazi propaganda all over its site. A forum for new mothers would not be able to exclude anti-natalist Malthusians from insulting mothers for having given birth, so long as the insults were viewpoint-based. Forcing companies to carry terrible speech and terrible speakers infringes upon their freedom to decide what they want to say and quite transparently is designed to force a social media company’s users to be exposed to particular ways of thinking.

Once we accept that social media content is a valid subject of government regulation, there will be no end to attempts by the government to patrol it. This is a fact of which Ken Paxton and his apologists are fully aware. Personal insults, political insults, and inconvenient facts are routinely censored by governments all over the world. To see how truly absurd unrestrained censorship regimes can grow, consider one recent example: In April of this year, the German government sent a criminal investigative inquiry to a U.S. social network because one of its users had dared to make fun of a fat German politician for being overweight. Making a true statement of fact about an obese parliamentarian was apparently contrary to German “defamation of honor” laws, which operate identically to the seditious libel laws the First Amendment long ago abolished.

The issues in NetChoice v. Paxton are thus very simple to understand. A bunch of Southern conservatives and Northern liberals, mostly lawyer-politicians, think their respective one-party states should have the power to dictate what you say and see online. These individuals are willing to engage in all manner of lengthy legal theoretical gymnastics to disguise the truth of the matter: They are petty censors who hate you for disagreeing with them.

They want to use the government, backed by the full force of its monopoly on lawful violence, to silence your speech — if not by shutting you up, then by drowning you out.

Our response as a society should be, in the immortal words of Elon Musk, “Message received.”

Editor's note: This article was originally published with the wrong headline.

Preston Byrne

Preston Byrne is a senior fellow of the Adam Smith Institute in London. He also is a lawyer in private practice, dual-qualified in England and the United States, where he focuses on freedom of speech and technology law.
@prestonjbyrne →