The U.S. Supreme Court and Social Media
The original version of this analysis was published on October 19, 2023, in the Spanish-language section “The Supreme Court Game” of the digital magazine Nexos. This version was translated into English using ChatGPT 4.0.
On Friday, September 29, 2023, the U.S. Supreme Court announced that it had granted certiorari in a number of the cases submitted to it, including one urged on it by U.S. Solicitor General Elizabeth Prelogar on August 14.
Certiorari is the power of a court to review decisions of lower courts. The filing that Joe Biden’s administration sent to the Supreme Court in August concerns two cases in which the right at stake is none other than the First Amendment to the U.S. Constitution:
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances."
The cases that the current administration is asking the Supreme Court to review stem from appellate decisions issued last year concerning laws passed in 2021 in Florida (SB 7072) and Texas (HB 20), which, according to the administration, do not comply with the First Amendment.
Senate Bill 7072: Social Media Platforms (so named because it originated in the Florida Senate) makes it a violation to engage in the “de-platforming of a political candidate or news enterprise” on social media and requires platforms to meet certain criteria when restricting users’ expression (such as notifying the user and publishing a report on their process and decisions).
Meanwhile, House Bill 20: Relating to complaint procedures and disclosure requirements for and censorship of users' expressions by social media platforms (so named because it originated in the Texas House of Representatives) prohibits large social media platforms from deleting, moderating, or labeling users’ posts in that state on the basis of viewpoint (with exceptions for child sexual exploitation, incitement to criminal activity, and threats of violence), and it establishes complaint procedures and disclosure requirements for those platforms.
It is no surprise that this battle has reached the U.S. Supreme Court. As mentioned, both laws were reviewed by federal appellate courts, with mixed results. In Moody v. NetChoice (No. 22-277), the Eleventh Circuit Court of Appeals (based in Atlanta, Georgia) held that most of the Florida law likely violates the Constitution and kept it blocked. Months later, in NetChoice v. Paxton (No. 22-555), the Fifth Circuit Court of Appeals (in New Orleans, Louisiana) upheld the Texas law.
The group supporting these laws (mostly Republican) invokes the First Amendment, arguing that social media platforms have limited, and in some cases curtailed, conservatives’ freedom of expression. The group defending the platforms (NetChoice and the Computer & Communications Industry Association, CCIA) counters that the First Amendment protects the companies themselves: the codes of conduct governing user behavior on the platforms are an exercise of that same right, giving platforms the same editorial discretion that traditional media enjoy when deciding what information to display on their pages and apps.
Is it necessary to regulate social media?
Not necessarily. Codifying rights, however, does not limit individuals; it can clarify those rights and afford them maximum protection. Likewise, in international law, many customs (consistent practices that states follow out of a sense of legal obligation) are codified to guarantee rights, obligations, responsibilities, and, most importantly, protections in case of violations.
The internet is sui generis. Unlike a country or an institution, which requires organization and a centralized body to make decisions and set direction, the internet began as a space whose founders intended that government would play no role in regulating it. The original idea was that people would share information and communicate among themselves, with regulation existing only in the code that moves sites, apps, and information between devices and individuals.
From the start, errors both deliberate and inadvertent emerged, producing viruses, worms, and other computing problems. This led to a variety of collective solutions, some global in scale, that addressed these problems without a government or central organization dictating how to act. Indeed, companies have hired IT specialists and even enlisted ethical hackers to fix problems in their apps, websites, systems, and networks. Current definitions of such threats are public and regularly updated by members of the internet community.
When it comes to regulating social media, legislating platform by platform would be complicated, because platforms evolve very quickly. We have seen this over the past decade: while one group deliberates over a handful of platforms (relevant, of course, because of their user numbers), many new social networks emerge, some of which may even pose national security risks (see the U.S. debates over TikTok and over Russian interference carried out through social media). In the ongoing debate over internet governance, there is still no central international mechanism to regulate or govern the internet. Instead, there are voluntary agreements among all its users (civil society, businesses, governments, academics, and international organizations) to cooperate so that the internet operates with the common good in mind.
At the same time, there are unwritten rules of the internet that everyone agrees to follow: preventing cyberbullying, protecting data (as far as possible, through regulations like the European GDPR, a topic for another occasion), guarding against copyright infringement, protecting children with an emphasis on combating sex trafficking, and reducing the likelihood of disruptive acts.
In this sense, if social media platforms are to be regulated, the rules should derive from voluntary agreements among internet users; it is hard to imagine laws passing that contradict or alter definitions already agreed upon. Nonetheless, it would be important for the Court to underscore the value of terms and conditions that prevent the spread of dishonest or intentionally misleading posts on social media, a point that would put the common good at the center of the First Amendment debate.
Is it possible for the Supreme Court to rule this way?
Unlikely. The questions presented to the Supreme Court do not run in this direction, and the debate will not center on it. The focus is whether these laws, by restricting content moderation, requiring individualized explanations for moderation decisions, and imposing general disclosure obligations, are consistent with the First Amendment, or whether they violate it as measures motivated by claims of viewpoint discrimination.
Still, we can glimpse where the debate, and the decision of the nation’s highest court, is headed. In May 2022, the Supreme Court voted 5-4 to pause the Texas law while it made its way through the appellate courts. In that proceeding it was observed that social media has transformed how people communicate and get their news, and that the issue is so novel and significant that the Court will have to address it at some point. Four Justices indicated they would have left the law in effect: Samuel Alito, joined in dissent by Neil Gorsuch and Clarence Thomas, and, separately, the liberal Elena Kagan.
It will undoubtedly be an important and very interesting debate to follow in the U.S. Supreme Court.