On Sept. 16, the Fifth Circuit issued its opinion in NetChoice v. Paxton, upholding the controversial Texas law that limits the ability of large social media platforms to moderate content and also imposes disclosure and appeal requirements on them. The Fifth Circuit had previously stayed a district court injunction against the law, but the Supreme Court voted 5-4 to vacate the stay. The opinion opens up a stark circuit split with the Eleventh Circuit, which had ruled that a Florida law that also imposed content moderation restrictions on platforms violated the First Amendment. Unless the platforms get another stay pending rehearing en banc by the Fifth Circuit or review by the Supreme Court, the Texas law will go into effect, with potentially massive consequences for how the major social media companies moderate their platforms.
The initial reaction to the decision among policy experts and legal scholars has been, to put it mildly, harsh. It’s been called “legally bonkers,” a “troll to get SCOTUS to grant cert,” an “angrily incoherent First Amendment decision,” and “the single dumbest court ruling I’ve seen in a long, long time.” As someone who has argued for the constitutionality (and indeed desirability) of some government regulation of platform content moderation, I was hoping that the first judicial decision upholding such regulation would be a thoughtful and measured approach to what is indisputably a hard, even wicked, problem.
Unfortunately, the Fifth Circuit’s decision, written by Judge Andrew Oldham, is decidedly not that. Although not without its good points, it is largely a crude hack-and-slash job that misstates the facts and the law and ignores the proper role of an intermediate court, all in a sneering tone that pretends that those who disagree with it are either stupid or evil. It’s an extreme example of First Amendment absolutism: the insistence that the First Amendment has either nothing to do with content moderation or that it provides maximum constitutional protections to such practices. The opinion deserves to be swiftly overruled, either by the full Fifth Circuit or by the Supreme Court.
The opinion is long and complex, and there is much to be said about its merits and (mostly) demerits. In this section, I summarize the opinion, saving my comments for later.
The court first describes the key provisions of HB 20, as the Texas law is generally known. Section 7, the most controversial part of the bill and the one that has gotten the most attention, states:
A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.
Section 2 of the law imposes additional requirements on platforms, including moderation disclosures, a “biannual transparency report,” and a system for user complaints and appeals. Remedies for violations of the statute are limited to injunctive relief, along with attorney-fee recovery in certain instances.
After describing HB 20 and the procedural history of the case, the court rejects the platforms’ attempt to facially challenge the law—that is, to argue that the law is unconstitutional across the board and should be enjoined before it ever goes into effect. In particular, the court rejects the platforms’ argument that HB 20 is overbroad in that a “substantial number of its applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.” It does so for several reasons, most importantly that HB 20 “does not chill speech; instead, it chills censorship,” and that, even to the extent that HB 20 affects speech, it is only speech that is “at best a form of expressive conduct,” rather than “pure speech.” The court rejects the platforms’ concern that HB 20 would require them to host “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s],” arguing that such concerns are “borderline hypotheticals” and are not the core of the speech that the statute seeks to protect. And the court argues that there is no need to consider whether the law is overbroad, an analysis undertaken to “protect third parties who cannot ‘undertake the considerable burden’ of as-applied litigation and whose speech is therefore likely to be chilled by an overbroad law.”
The court next turns to the heart of the opinion, the substantive First Amendment question. Rather than starting with the existing caselaw, Judge Oldham writes: “As always, we start with the original public meaning of the Constitution’s text.” Finding that the original public meaning of the First Amendment was chiefly “a prohibition on prior restraints and, second, a privilege of speaking in good faith on matters of public concern,” the court holds that HB 20 does not run afoul of the First Amendment.
The court then addresses the relevant Supreme Court cases. On one side of the argument, it says, are cases like Miami Herald v. Tornillo, in which the Supreme Court struck down on First Amendment grounds a Florida statute that required newspapers to provide a “right of reply” to political candidates. On the other side are cases like Rumsfeld v. FAIR, in which the Supreme Court held that the First Amendment did not give universities the right to exclude military recruiters, and PruneYard Shopping Center v. Robins, which allowed a state to force a private shopping center to allow members of the public to distribute leaflets. The Fifth Circuit held that the Tornillo line of cases did not apply, because there is no “intimate connection” between user content and the platforms themselves, the latter of which, the court claims, “exercise virtually no editorial control or judgment.”
To support its argument that platforms should not be viewed as First Amendment speakers with respect to the content they host, the court looks to Section 230 of the Communications Decency Act of 1996. Section 230 is the landmark law that immunizes platforms from liability for almost all of the content they host and that, prior to the Texas and Florida social media bills, was the main site of legal debate over content moderation practices. The court views Section 230 as reflecting “Congress’s judgment that the Platforms do not operate like traditional publishers and are not ‘speaking’ when they host user-submitted content.” And it rejects the dominant judicial view that Section 230 gives platforms carte blanche to moderate, arguing that it permits moderation only on a narrow set of grounds.
Having held that the First Amendment does not protect platform moderation (or, in the court’s words, “censorship”), the court then argues that Texas can lawfully characterize platforms as “common carriers”—that is, “communication and transportation providers that hold themselves out to serve all members of the public without individualized bargaining”—and impose nondiscrimination provisions on them.
The court then concludes its First Amendment analysis of Section 7 by holding that, even if content moderation is protected by the First Amendment, HB 20 is constitutional. The court holds that HB 20 is a content-neutral regulation and thus need only satisfy “intermediate scrutiny,” under which “a content-neutral regulation will be sustained under the First Amendment if it advances important governmental interests unrelated to the suppression of free speech and does not burden substantially more speech than necessary to further those interests.” The court holds that HB 20 furthers Texas’s “fundamental interest in protecting the free exchange of ideas and information in [the] state” and that HB 20 is not overly burdensome because the alternative—a state-run social media site—would not be successful, given the market dominance of the incumbent platforms.
Having upheld Section 7, the court then turns to Section 2 and its transparency, disclosure, and complaint-appeal requirements. It holds that these provisions satisfy the test set out in Zauderer v. Office of Disciplinary Counsel, under which the government can require commercial enterprises to disclose “purely factual and uncontroversial information” about their services as long as those disclosures are not “unjustified or unduly burdensome … by chilling protected commercial speech.”
The court closes its opinion by addressing the Eleventh Circuit’s opinion that struck down Florida’s social media moderation law. The court first distinguishes the two laws, noting that (1) the Texas law permits more content moderation than does the Florida law, although it applies to more users; (2) the Florida law goes beyond the Texas law in prohibiting platforms from appending their own speech to user content; and (3) the Florida law’s remedies—$250,000 per day for certain violations—are far more punitive than the Texas law’s primarily injunctive remedies. But the court also disagrees with some of the Eleventh Circuit’s core legal reasoning, principally the Eleventh Circuit’s holding that Miami Herald applies to laws seeking to restrict content moderation.
Judge Edith Jones wrote a short concurrence, calling the platforms’ arguments “ludicrous” and the platforms the “Goliaths of internet communications,” as compared with the “Davids who use their platforms.” Judge Leslie Southwick concurred in part and dissented in part. Importantly, he disagreed with the majority’s holding that the First Amendment did not apply to the platforms’ content moderation decisions and that HB 20 satisfied intermediate scrutiny.
Before I get into my (many) criticisms of the opinion, let me say a few things in its defense. There is something refreshing about courts finally showing some skepticism toward giant technology companies. Decades of extravagant judicial solicitude for internet giants, on both statutory and constitutional issues, have led them and their supporters to be complacent and overconfident in the face of government regulation. The First Amendment should protect the rights of giant corporations only insofar as such protection redounds to the expressive benefits of users and listeners. In other words, it is good, as the Fifth Circuit wrote, that “the Platforms cannot invoke ‘editorial discretion’ as if uttering some sort of First Amendment talisman.” Talismans, like all categorical rules, are a poor fit for difficult regulatory issues involving large swaths of economic and social life. If nothing else, the Fifth Circuit decision widens—indeed blows out—the legal and policy Overton window on platform governance.
Indeed, although (as I explain below) most of the opinion badly overreaches, the court’s skepticism of digital corporate power leads it to reason creatively and compellingly in certain respects. For example, its holding that, to the extent the Texas law does implicate the First Amendment, the proper standard of review is intermediate scrutiny offers a promising avenue for analyzing content moderation laws. Intermediate scrutiny is the closest thing American law has to the flexible, fact-based proportionality review that is best suited to resolving complex questions of constitutional law and policy.
And the court is also correct that the state interest in such laws—the “fundamental interest in protecting the free exchange of ideas and information in this state”—is indeed an important one. In this (but only this) respect, the opinion is more thoughtful than that of the Eleventh Circuit, which unconvincingly claimed that “there’s no legitimate—let alone substantial—governmental interest in leveling the expressive playing field” and that neither is there a “substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts.”
The most interesting (though certainly not uncontroversial) part of the opinion is the court’s analysis of applying common-carriage principles to social media platforms (Part III.E). It’s striking how much this part of the opinion, written by a Trump appointee with unimpeachable conservative credentials, deviates from conservative orthodoxy on government regulation and granting corporations expansive First Amendment rights. (It is perhaps a notable sign of the fissures in the conservative legal movement that Judge Jones, a Reagan appointee, did not join this part of the opinion.) If one ignores the context of the rest of the opinion, one could easily imagine this section to have been written by a progressive “neo-Brandeisian” scholar operating from within the growing law and political economy movement. Under the most plausible reading of Supreme Court precedent, it is almost certainly wrong (because common-carriage regulation is inconsistent with platform moderation decisions being protected by the First Amendment), but it offers a compelling model for what the Supreme Court could decide to do. And whatever the arguments’ merits, it demonstrates that this isn’t your grandparents’ conservative legal movement.
Repeatedly, the court misstates the law—or at best puts forward highly tendentious arguments as if they were obviously correct. As Genevieve Lakier nicely puts it, reading the opinion “feels like entering the upside down.”
Consider, for example, the central doctrinal move, that of distinguishing Miami Herald. One reason the court distinguishes Miami Herald is that platforms, unlike newspapers, have unlimited capacity. While Miami Herald did indeed note that a right-of-reply statute could limit a newspaper’s editorial resources, it also explicitly stated that this factor was ultimately irrelevant:
Even if a newspaper would face no additional costs to comply with a compulsory access law and would not be forced to forgo publication of news or opinion by the inclusion of a reply, the [right-of-reply] statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors.
In other cases, the court uses question-begging sleights of hand. For example, it argues that platforms cannot claim that their “editorial discretion” gets First Amendment protection because “an entity that exercises ‘editorial discretion’ accepts reputation and legal responsibility for the content it edits” and “[p]latforms strenuously disclaim any reputational or legal responsibility for the content they host.” But the court never explains why public acceptance of responsibility is necessary, as a matter of constitutional law, for First Amendment protection. And if such acceptance were necessary, the court doesn’t explain why the platforms’ terms of service don’t count. After all, one point of having terms of service is to signal to the public what sort of platform one claims to be.
As to legal responsibility, the main reason platforms disclaim legal responsibility is that Section 230, as it has been interpreted by most courts for nearly three decades, provides a liability shield. Whether that shield is good or bad as a matter of policy is its own question, but the policy debate does not change the fact that the shield exists. If Section 230 applied to newspapers, it would be legal malpractice for newspaper general counsels not to disclaim legal responsibility for what their newspapers covered as well. But that wouldn’t change the fact that newspapers exercise editorial judgment.
Indeed, the court’s entire treatment of Section 230 is a confusing mess. It uses the law to argue that Congress would agree with its view that “platforms do not operate like traditional publishers and are not ‘speak[ing]’ when they host user-submitted content.” Whether or not a hypothetical congressional view on the nature of internet platforms should have any bearing on the First Amendment, Section 230 doesn’t provide anything close to a clear answer as to how Congress would want a case like this resolved. Section 230 was one part of a much larger law (most of the rest of which was ultimately struck down on First Amendment grounds), and it sought to encourage platform moderation in the short term so that, in the long term, the internet could flourish.
Perhaps realizing that its argument fits awkwardly with the generally accepted understanding of Section 230, the court tries to read Section 230 narrowly, specifically its (c)(2) liability shield for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The court argues that “otherwise objectionable” should be read as limited by the other listed categories of content. To be fair to the court, this is a conceivable reading of the statute, but it also goes against nearly 30 years of judicial and scholarly consensus.
The opinion’s legal weaknesses are bad enough. Its factual inaccuracies are even worse. The court rejects the facial challenge on the grounds that platforms have resources that individual litigants do not. That is certainly true, but it is also the case that platforms are facing an immense technological and organizational burden: running the digital public square for hundreds of millions of Americans, not to mention billions more people around the world. That should not exempt them from government regulation, but it is a reason to be careful about vague and underbaked government mandates.
The court seems unaware of how platforms actually operate, even at the most basic level. For example, it cites statements by social media companies that “[w]e don’t want to have editorial judgment over the content that’s in your feed,” while ignoring that some of these statements are nearly eight years old and predate the massive increases in content moderation that the platforms have since undertaken.
At points the court seems almost purposefully blind to modern content moderation practices. It rejects the platforms’ concerns that the law would require them to host “pro-Nazi speech, terrorist propaganda, [and] Holocaust denial[s],” arguing that such concerns are “borderline hypotheticals.” It argues that platforms are “nothing like the newspaper in Miami Herald” because they “exercise virtually no editorial control or judgment.” These assertions would come as news to the 15,000 people that Facebook employs to moderate 3 million pieces of content every day, including plenty of pro-Nazi speech, terrorist propaganda, and Holocaust denial.
The court also states that platforms don’t exercise editorial discretion because they don’t prescreen content. That’s simply wrong. Platforms are increasingly using algorithms to screen content before it is posted. The court waves away the argument that algorithms should count as “substantive, discretionary review akin to newspaper editors,” but it never explains why this should be dispositive. The court refuses to consider the possibility that algorithms both encode moderation choices and communicate those choices to outside observers. And it’s not hard to find instances where this has been the case (for example, controversies over removals of pictures of breastfeeding mothers).
This willful ignorance continues when the court—grudgingly assuming that platform moderation is covered by the First Amendment—applies intermediate scrutiny. Its breezy holding that HB 20 “does not burden substantially more speech than necessary to further Texas’s interests” is so breathtakingly perfunctory that it’s worth reproducing in full:
This is perhaps best illustrated by considering the Platforms’ main argument to the contrary: that “[i]f the State were truly interested in providing a viewpoint-neutral public forum, the State could have created its own government-run social-media platform.” The same network effects that make the Platforms so useful to their users mean that Texas (or even a private competitor) is unlikely to be able to reproduce that network and create a similarly valuable communications medium. It’s almost as absurd to tell Texas to just make its own Twitter as it would have been to tell broadcasters to just make their own cable systems. And aside from this bizarre claim, the Platforms offer no less restrictive alternative that would similarly advance Texas’s interest in “promoting the widespread dissemination of information from a multiplicity of sources.”
The issue of whether Texas should set up its own BrisketTube is completely irrelevant. But more importantly, this is the entirety of the court’s analysis regarding the burden on the platforms’ speech—which is to say, no real analysis at all. If, as the court argues in the previous sections of the opinion, content moderation is categorically not speech, then of course the Texas law does not infringe upon more speech than is necessary (since it doesn’t infringe on speech at all). But in a section applying intermediate scrutiny, the court has to consider—at least for the sake of argument—that the Texas law does in fact infringe on the platforms’ speech. And speech aside, there’s no easy switch that platforms can flip to comply with the Texas law, which will require them to spend vast technological and organizational resources.
All of these problems stem from the court’s insistence on reductive, binary thinking. It’s true that Miami Herald and its ilk are an awkward fit for social media platforms—I myself have made this argument many times. But Rumsfeld and PruneYard are not perfect fits either. Similarly, contrasting “censorship” with “free speech” is overly simplistic; some degree of moderation is necessary to enable others to speak. What’s needed is to develop new, intermediate frameworks to adjudicate these issues in an accurate, fact-specific way. Unfortunately, the court’s approach does none of that.
It is hardly unheard of for a judge to make mistakes of law or facts. It happens all the time and, though it’s never a good thing, it’s a normal part of the self-corrective mechanism of arguments, opinions, appeals, and critical commentary.
But sometimes an opinion shows such basic deficiencies in judicial craft that one has to question the soundness not just of the opinion itself, but of the entire approach of its author. This, unfortunately, is one of those opinions.
To start, Judge Oldham seems to have forgotten that he is not a Supreme Court justice but is rather the second-most junior of 17 judges on a court that is itself but one of 13 courts of appeal. In other words, his job in the first instance is to follow Supreme Court precedent as far as it can take him.
Thus, what in the world is one to make of this dismissive remark, which begins Oldham’s analysis of Supreme Court precedent: “Rather than mount any challenge under the original public meaning of the First Amendment, the Platforms instead focus their attention on Supreme Court doctrine.” Well, yeah—that’s generally how constitutional litigation works, even at the Supreme Court. (And lower courts are not supposed to ignore binding precedent because they think that the Supreme Court will change its mind.)
Putting aside the question of whether Oldham gets the law right—for the reasons described above, I think he does not—it’s downright bizarre for him to begin his opinion, “as always,” with the original meaning of the First Amendment. First off, this originalism is highly selective, since it does not address the question of whether, as an original matter, courts of appeal are permitted to dispense with Supreme Court doctrine in favor of their own historical analysis.
But more fundamentally, originalism, whatever its status as “our law” in other constitutional domains, is simply not compatible with the vast majority of modern First Amendment doctrine, which is overwhelmingly a product of 20th-century legal sensibilities (not to mention the challenges of applying 18th-century law to 21st-century technology). One could, of course, rebuild First Amendment doctrine on strictly originalist grounds; that is arguably Justice Clarence Thomas’s long-standing goal and, given the increasingly conservative composition of the Supreme Court, it may be the future of First Amendment jurisprudence. But it would be a major and highly disruptive undertaking, and one that is only appropriate for the Supreme Court, not a lower court, to undertake.
Questionable methodology aside, what is most off-putting about the opinion is its tone, which, as Blake Reid well captured, combines “condescension, cherry-picking, overclaiming, and obviously motivated reasoning.” The opinion repeats, over and over again, some variation of “censorship isn’t speech,” as if repetition and the liberal use of italics constituted a legal argument. It drips with contempt toward the platforms, which it dismisses as “well-heeled corporations that have hired an armada of attorneys from some of the best law firms in the world to protect their censorship rights.” It so delights in pointing out inconsistencies in the platforms’ public statements (to be sure, a fair criticism) that it frequently seems more interested in trolling the platforms than in faithfully applying binding precedent.
The judicial virtue of humility need not imply timidity, simply an understanding of the complexity of real-world problems and the fallibility of judges. Judge Oldham would have benefited from adopting the perspective of Judge Southwick, who dissented from the primary First Amendment holdings: “None of the precedents fit seamlessly. The majority appears assured of their approach; I am hesitant.” Oldham needn’t have agreed with Southwick on the merits; even Justice Samuel Alito, hardly the cautious jurist, has observed that “it is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” even as he voted to allow the Texas law to go forward.
What Comes Next?
The Texas law will soon come into force. What happens next depends, in the first instance, on the platforms themselves. Daphne Keller speculates that the platforms could comply with the law by disabling moderation by default, and then allowing users, who will suddenly be “flood[ed] with the garbage [Texas] asked for,” to easily opt in to the moderated version they’re used to. Otherwise, it’s hard to see why the platforms would take the risk of continuing to do business in Texas. Complying with the law would upend platforms’ already fragile content-moderation practices, and running two different systems, one for Texas and one for the rest of the world, has its own obvious challenges (and is likely illegal under the law’s location-based provisions). Far from promoting free expression, the law may well lead the platforms to geoblock Texas and its users entirely in order to avoid Texas’s jurisdiction.
Legally, the platforms could petition for rehearing by the full Fifth Circuit en banc, as could any judge on the court. While the Fifth Circuit remains one of the most conservative circuits in the country, Oldham’s opinion is so extreme that even his conservative colleagues may want, at a minimum, to sand down its sharper edges.
If the opinion stands, the issue is likely to end up in the Supreme Court. Not only is it one of immense national importance, but there is now a clear circuit split between the Fifth and Eleventh Circuits (as Oldham’s detailed, eight-page criticism of the Eleventh Circuit’s opinion makes clear). It also implicates issues beyond just the First Amendment, including the proper interpretation of Section 230 (specifically what “otherwise objectionable” means in (c)(2)) and whether state regulations of content moderation are compatible with the dormant Commerce Clause.
As I’ve argued before, “Once the issue gets to the Supreme Court, it’s far from clear that the issue will be resolved in the technology companies’ favor.” Both Big Tech–skeptical conservatives and pro-regulatory liberals may find common cause in upholding some government regulation, though almost certainly not to the extent that the Fifth Circuit has.
It’s rare that a legal issue comes to the Supreme Court as a true loose ball. These moments are exciting because they hold open the promise of creative legal problem solving across the traditional liberal-conservative divides. But they require humility, pragmatism, and a willingness to see all sides of a difficult issue. The Eleventh Circuit tried but struggled; the Fifth Circuit didn’t even try. Here’s hoping that the Supreme Court does a better job.