The U.S. Supreme Court and Social Media (Part II)

The original version of this analysis was published in Spanish on October 26, 2024, in Nexos digital magazine's section "The Supreme Court Game." This version was translated into English using ChatGPT 4.0.

The debate over the role of social media and freedom of expression has reached a new turning point with recent decisions by the U.S. Supreme Court (SCOTUS) in Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton, which concern state laws in Florida (SB 7072) and Texas (HB 20), respectively.

As I explained previously in this magazine, the contention involves the growing tensions between content moderation on digital platforms and government control over online expression; in other words, freedom of speech, a right enshrined in the First Amendment of the U.S. Constitution.

Although the two rulings issued on July 1 produced three distinct outcomes, the cases are remarkably similar. This is because, as noted, the Supreme Court chose to treat them as a single consolidated case. In both, first, SCOTUS vacated the previous rulings by the Fifth and Eleventh Circuit Courts of Appeals. Second, it remanded the cases to those lower courts for a fresh analysis. Third, the only difference is that in Moody v. NetChoice, LLC, the company must pay the state of Florida $3,473.08 in case-related costs, while in NetChoice, LLC v. Paxton, the state of Texas must pay the company $8,731.38 for the same.

The reasoning behind the second holding is severe: according to the Supreme Court's opinion, the cases are returned to the Courts of Appeals because "neither has conducted an adequate examination of the First Amendment challenges to the state laws."

Although these rulings do not definitively resolve the debate at hand, they do set an important precedent for future cases involving the regulation of digital platforms.

What Precedent Is Set?

While the Supreme Court has returned the cases to the courts of appeals for more detailed analysis, the message is clear: any law seeking to limit the ability of social media platforms to moderate content will face rigorous constitutional scrutiny. The protection of editorial discretion remains a central principle of the First Amendment, and any government attempt to impose restrictions on such expression, for any reason, will be closely examined.

The Context: Polarization

The Supreme Court notes that the cases date back to 2021, when Florida and Texas passed laws that imposed significant restrictions on major social media platforms, limiting their ability to moderate content and requiring detailed explanations for each piece of content removed or modified. These laws emerged in response to the growing perception that platforms like Facebook, YouTube, and Twitter (now X) exert disproportionate control over public discourse, often censoring certain political opinions under the pretext of maintaining online safety and well-being (recall, for example, Trump's ban from the platform now known as X).

Thus, the core of the legal dispute lies in the tension between two fundamental rights: the right of platforms to exercise editorial control over the content published on them and the right of users to free expression in the digital public space. While Florida and Texas argued that their laws aim to protect diversity of opinion and prevent ideological censorship, the platforms argued that these laws violate their freedom of expression by forcing them to host content they disagree with, particularly posts that violate the platforms' ethical guidelines governing user behavior. Again, the trade group behind NetChoice, LLC asserts that social media's editorial discretion is protected under the same First Amendment right that grants conventional media outlets discretion over the information they publish.

What Did the Parties Propose in Their Briefs to the Supreme Court?

As expected, the Courts of Appeals adopted diametrically opposed positions. The Eleventh Circuit held that Florida's restrictions on content moderation violated the First Amendment by interfering with the platforms' "editorial discretion." In contrast, the Fifth Circuit found that moderation activities did not constitute an exercise of free speech and that the state of Texas was within its rights to regulate the platforms in favor of a "more balanced marketplace of ideas."

As previously anticipated here, the Supreme Court did not rule on the minutiae of "content moderation practices used by giant social media platforms on their most well-known services to filter, alter, or tag user posts," the issue on which the Justices faulted the Courts of Appeals for dwelling in their respective decisions. Instead, SCOTUS based its opinion on "the full range of activities covered by the [state] laws" and the need to "measure the constitutional applications against the unconstitutional ones... in relation to the First Amendment."

In its opinion, the Supreme Court made it clear that both lower courts had overlooked crucial elements in their analyses. According to the majority opinion, written by Justice Elena Kagan, neither the Fifth nor the Eleventh Circuit adequately analyzed how the state laws impacted the various applications of the First Amendment. The Court also held that before it could assess whether the state laws were constitutional, the Courts of Appeals must examine the scope of those laws and consider whether their application violates the platforms' protected free speech rights.

The Protection of Content Moderation

One of the central points of this consolidated decision is the reaffirmation of a basic principle of the First Amendment: the protection of editorial activities, even when conducted on digital platforms. Citing precedents such as Miami Herald Publishing Co. v. Tornillo (1974) and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995), the Supreme Court made it clear that the government cannot force media outlets to publish content they disagree with. The majority of the Court agrees with NetChoice, LLC et al. that this principle also applies to large social media platforms, whose editorial functions include selecting and prioritizing the content they present to their users.

The consolidated ruling emphasizes that content moderation activities, such as the ranking of posts in Facebook's News Feed or on YouTube's homepage, constitute a form of protected speech. Platforms not only distribute information but also exercise significant control over what content is highlighted and what is suppressed, thereby shaping a particular discourse that reflects their community guidelines and corporate values.

Future Implications

While this case marks a significant milestone at the intersection of state regulation and digital freedom of expression, it leaves many questions unanswered. For example, how will this ruling affect future state legislation that seeks to restrict or expand platforms' moderation rights? What limits, then, can states impose without violating the First Amendment?

Although there is no immediate answer, these will undoubtedly be the next battlegrounds in the war over control of discourse in the digital age—and we will soon have the opportunity to see and study them as both Courts of Appeals take up the cases anew.

The Supreme Court's consolidated decision is a reminder that, in an increasingly digitized world, constitutional protections must adapt to new technologies without losing sight of the fundamental principles that govern freedom of expression.
