The U.S. Supreme Court has been asked to rule on the liability of computer service providers (such as social media and other online platforms) for content posted by their users. The issue, known in the United States as “internet liability,” has been the focus of heated debate for years, and the Court’s involvement highlights the need for comprehensive reform in the U.S. legal system.
At the center of the issue is Section 230 of the Communications Decency Act of 1996[1], which for nearly three decades has provided online platforms with a kind of immunity for user-generated content: if a user posts offensive, defamatory, or otherwise illicit content on a platform, the platform cannot be held liable for it.
Despite the high expectations surrounding its ruling in the case Gonzalez v. Google, the Supreme Court chose not to rule directly on the provision’s applicability to the complex dynamics of algorithms that select content based on users’ tastes. On the contrary, it expressed institutional discomfort at being asked to decide an issue that ultimately requires congressional action.
However, in the judgment issued the same day in a related case (Twitter v. Taamneh), the Justices offered interesting insights into platforms’ liability for activities carried out by their users, albeit under the distinct legal theory of aiding and abetting terrorism.
Section 230 of the Communications Decency Act
In the 1990s, the Communications Decency Act initially threatened to become a gag on the very young Internet, which still largely lacked any regulatory safeguards. The U.S. legislature had felt the need to fill this gap by regulating “obscene and indecent”[2] communications directed at minors under 18 and by prohibiting the distribution of “patently offensive” materials accessible to them.
Originally, the dominant concern about content available on the Internet was minors’ uncontrolled access to pornography. In the early stages of the legislative process, little attention was paid to the possible repercussions for web operators.
The origin of the amendment that introduced Section 230, by contrast, lies in two decisions handed down by New York courts in those years.
In the first, Cubby, Inc. v. CompuServe (1991), the court held that CompuServe could not be held liable for defamatory comments posted in one of the company’s forums because it did not review content before it was posted, but merely hosted it on its platform.
In 1995, however, in Stratton Oakmont, Inc. v. Prodigy Services Co., the conclusion was different: because Prodigy moderated its online message boards and deleted certain messages for “offensiveness and bad taste,” the court held that it could be held liable for content posted on its platform.
Two members of Congress, Ron Wyden (a Democrat) and Chris Cox (a Republican), therefore proposed an amendment to exclude providers’ liability for content posted by users, even if moderation activity was carried out on the platform.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[3]
According to some U.S. authors[4], the foundation for the development of the mammoth American web industry was laid on these 26 words.
In addition, Section 230 also provides that “No provider or user of an interactive computer service shall be held liable on account of
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”[5]
Thus, both scenarios concerning providers’ active or passive handling of user content were resolved, in each case limiting their liability.
Following the approval of the Communications Decency Act, however, numerous protests arose from civil rights groups, which questioned the constitutionality of the rest of the legislation and the prohibitions it contained, considered contrary to the First Amendment (which protects freedom of thought and expression).
In 1997, therefore, the case Reno v. American Civil Liberties Union came before the Supreme Court, which declared unconstitutional the parts of the text that restricted “obscene and indecent” content, fearing that health-related materials, such as techniques for preventing the spread of AIDS, might also fall under the definition.
Although the prohibitions initially introduced by the Communications Decency Act had disappeared, Section 230 remained in force, preserving providers’ immunity for all kinds of content posted by their users.
This has allowed Internet platforms to grow and thrive more easily without being stifled by costly content audits and without having to restrict freedom of expression online.
However, in recent years, there have been many harsh criticisms of Section 230 because it allows platforms to tolerate defamatory content, misinformation and incitement to violence. In addition, several authors argue that while platforms guarantee the right to free expression, they still do not do enough to remove offensive content and protect users[6].
Previous cases
In the recent past, the issue stimulated interesting evaluations by Supreme Court Justices, particularly in the cases Force v. Facebook, Inc. (2019)[7] and Malwarebytes, Inc. v. Enigma Software Group USA, LLC (2019)[8].
The first, in particular, concerned the case of a police officer killed in a terrorist attack carried out by Hamas. His relatives accused Facebook of providing support to terrorists by allowing them to use the platform to spread their propaganda and organize attacks.
The alleged basis of the platform’s liability lay mainly in the operation of its algorithm and its tendency to create “echo chambers,”[9] favoring the discovery of content compatible with users’ ideas or tastes.
The U.S. Court of Appeals for the Second Circuit thus ruled, for the first time, that Section 230 also protects platforms such as Facebook from civil actions brought by victims of terrorism. Following the appeal filed by the victims’ relatives, however, the Supreme Court declined to review the issue[10].
Nevertheless, Chief Judge Katzmann’s dissenting opinion had pointed out that the “active” role of certain platforms in users’ choices deserved greater consideration: “Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with – and that they have done it too well, nudging susceptible souls ever further down dark paths.”[11]
The issue of internet provider liability, after the Court again declined to take it up in Malwarebytes, Inc. v. Enigma Software Group USA, LLC (where Justice Thomas’s statement respecting the denial of certiorari cited Chief Judge Katzmann’s opinion in Force v. Facebook, Inc.), has now finally come before the Justices in three different cases: Gonzalez v. Google[12], Twitter v. Taamneh[13], and Clayborn v. Twitter[14].
Gonzalez v. Google
Nohemi Gonzalez, a U.S. citizen, was killed in the November 2015 ISIS terrorist attacks in Paris.
The following day, ISIS claimed responsibility for the attacks, releasing a written statement and a YouTube video.
Gonzalez’s father then brought an action against Google, Twitter, and Facebook, claiming, among other things, that Google had aided and abetted international terrorism by allowing ISIS to use its platform, particularly YouTube, “to recruit members, plan terrorist attacks, issue terrorist threats, instill fear and intimidate civilian populations.”[15] He also claimed that the very use of computer algorithms that suggest content to users based on their viewing history helped ISIS spread its message. In addition, through YouTube’s monetization system, Google allegedly reviewed and approved ISIS content, resulting in revenue sharing with individuals linked to the terrorist organization.
In the first two levels of court, Google’s motion to dismiss[16] was granted, as had happened in previous cases[17].
Twitter v. Taamneh
The second case decided, by contrast, originates from the 2017 terrorist attack on Istanbul’s Reina nightclub[18] and concerns aiding and abetting charges against Google, Twitter, and Facebook for failing to take meaningful measures to prevent the use of their services for terrorist purposes. In this case, after the initial dismissal at first instance, the Court of Appeals for the Ninth Circuit reversed the decision, finding a direct link between the dissemination of the ISIS message by social platforms and the harm caused to the victims of the attacks[19].
Gonzalez v. Google: the decision
The cited cases were treated jointly by the Supreme Court, which set out its reasoning on Section 230 in the Gonzalez ruling, while using the Twitter ruling to address aiding and abetting liability as set out in the Anti-Terrorism Act[20].
According to reports in the U.S. media, at the hearing the Supreme Court Justices had already expressed considerable perplexity about the appropriateness of deciding the future of the Internet themselves, suggesting it should be the legislature that brings about such a turnaround: “Isn’t it better to keep it the way it is, for us, and to put the burden on Congress to change that and they can consider the implications and make these predictive judgments?”[21]
Numerous reform proposals have been put forward in recent years by both Republican and Democratic congressmen, some to eliminate the text altogether, others to modify it[22].
And indeed, it comes as little surprise that the Justices chose not to rule on the applicability of Section 230 at all: “[…] we think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under either our decision in Twitter or the Ninth Circuit’s unchallenged holdings below. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief.”[23]
The Supreme Court, therefore, ruled exclusively on the applicability of aiding and abetting liability under the Anti-Terrorism Act (in Twitter v. Taamneh), avoiding direct engagement with the sensitive issue that had agitated observers, lawyers and otherwise, for months.
The reasoning in Twitter v. Taamneh
Specifically, two interesting insights can be drawn from Justice Thomas’s opinion in the Twitter case regarding the assessment of ISPs’ liability for how they manage content.
In its reasoning, the judgment addresses the relevance of a provider’s awareness that customers or users are employing its service for illicit purposes (e.g., the presence of ISIS on YouTube).
In this regard, the Supreme Court relied on the concept of the neutrality of the provider’s conduct (already invoked by the Ninth Circuit Court of Appeals with respect to the algorithm’s operation)[24] and pointed out that “distant inertia” cannot be transformed into knowing and substantial assistance to terrorist activity; the plaintiffs, it held, had not shown that the platforms at issue did anything more than transmit information to billions of people (filtered through the analysis of user preferences).
In addition, with specific reference to Google and YouTube’s monetization system, the plaintiffs failed to offer concrete evidence of a substantial contribution to ISIS or its members, whether in terms of the sums paid or the number of accounts and pieces of content approved by the platform.
It therefore could not be claimed that Google provided assistance to ISIS, either in the 2017 Istanbul attack or in the organization’s other terrorist activities.
The future of the Internet
The Supreme Court’s decision, almost Pilate-like in washing its hands of the question, certainly meets with favor among those, such as digital rights activists, who argue that Section 230 should remain part of the U.S. legal system: in their view, it has allowed greater freedom of expression online, fostering the ability to connect and communicate in ways not previously possible, and it allows online platforms to remove offensive content without necessarily censoring freedom of expression.[25]
However, the U.S. courts’ stance upholding the neutrality of online platforms is difficult to endorse, precisely in light of the known functioning of user-preference analysis algorithms.
Indeed, these cases again demonstrated the importance of striking a balance between freedom of expression and protecting users online. Platforms have the power to reach millions of people around the world, but with that power comes the responsibility to ensure that the content posted on them does not harm users, or at least to make concrete efforts toward that end.
In addition, the widespread perception of the need for greater regulation and supervision of online platforms has further emerged. While Section 230 has long provided an immunity that proved functional to the development of the Internet, concerns remain about the safety of users and the dissemination of content that poses dangers both online and offline.
The role and choices of providers are becoming increasingly central, especially in light of the operation of the algorithms they rely on.
Therefore, what is needed, before and far beyond any judicial pronouncement, is a regulatory intervention that addresses the problem through the critical lens of our decade and can reconcile the (often) conflicting demands of security and freedom of expression.
Prof. Avv. Roberto De Vita
Avv. Marco Della Bruna
References
[1] Title V of the broader Telecommunications Act of 1996, which amended Title 47 of the United States Code.
[2] “Obscene and indecent.”
[3] 47 U.S.C. § 230(c)(1).
[4] https://www.cornellpress.cornell.edu/book/9781501714412/the-twenty-six-words-that-created-the-internet/
[5] “No provider or user of an interactive computer service shall be held liable on account of-
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”
[6] https://www.eff.org/it/deeplinks/2021/04/content-moderation-losing-battle-infrastructure-companies-should-refuse-join-fight
[7] https://law.justia.com/cases/federal/appellate-courts/ca2/18-397/18-397-2019-07-31.html
[8] https://law.justia.com/cases/federal/appellate-courts/ca9/17-17351/17-17351-2019-09-12.html
[9] https://www.treccani.it/vocabolario/echo-chamber_(Neologismi)/
[10] https://www.supremecourt.gov/DocketPDF/19/19-859/127371/20200102175456524_ForcePetPDF.pdf
[11] “Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with – and that they have done it too well, nudging susceptible souls ever further down dark paths.”
[12] https://law.justia.com/cases/federal/appellate-courts/ca9/18-16700/18-16700-2021-06-22.html
[13] https://supreme.justia.com/cases/federal/us/598/21-1496/
[14] https://casetext.com/case/clayborn-v-twitter-inc
[15] Author’s translation.
[16] The defendant’s (in this case Google’s) motion to dismiss the claims of the litigant.
[17] https://law.justia.com/cases/federal/appellate-courts/ca9/18-16700/18-16700-2021-06-22.html
[18]https://www.repubblica.it/esteri/2017/01/01/news/istanbul_attacco_armato_ad_un_night_club_vittime_e_feriti-155199540/
[19] The Ninth Circuit Court used an original but effective example: “Assume that a person on one side of a crowded football stadium fires a high-powered rifle at a group of people on the other side of the stadium whose identities are unknown. Would the majority here affirm that the rifle shot hitting an unidentified bystander on the other side of the stadium has no direct relationship to the shooter and that the shot probably did not cause the resulting death?” [author’s translation]
Similar allegations, moreover, are at the heart of the case Clayborn v. Twitter, which originates from the 2015 San Bernardino massacre, https://casetext.com/case/clayborn-v-twitter-inc
[20] As last amended in 2016, https://www.congress.gov/bill/114th-congress/senate-bill/2040/text
[21] Quote attributed to Justice Kavanaugh, https://rollcall.com/2023/02/21/supreme-court-cautious-during-arguments-on-internet-liability-law/
[22] https://slate.com/technology/2021/03/section-230-reform-legislative-tracker.html
[23] “[…] we think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under either our decision in Twitter or the Ninth Circuit’s unchallenged holdings below. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief.”
[24] https://law.justia.com/cases/federal/appellate-courts/ca9/18-16700/18-16700-2021-06-22.html
[25] https://www.eff.org/it/deeplinks/2020/12/section-230-good-actually