You can’t eliminate real-world violence by suing over online speech
With so much of our national conversation taking place online, there’s an almost reflexive tendency to search for online causes — and online solutions — when tragedy strikes in the physical world. The murder of Charlie Kirk was no exception. Almost immediately, many (some in good faith, some not) began to speculate about the role played by online rhetoric and polarization.
Taking the stage at Utah Valley University to discuss political violence last week, Sens. Mark Kelly and John Curtis argued that social media platforms are fueling “radicalization” and violence through their content-recommendation algorithms. And they previewed their proposed solution: a bill that would strip platforms of Section 230 protections whenever their algorithms “amplify content that caused harm.”
This week, the senators unveiled the Algorithm Accountability Act. In a nutshell, the bill would require social media platforms to “exercise reasonable care” to prevent their algorithms from contributing to foreseeable bodily injury or death, whether the user is the victim or the perpetrator. A platform that fails to do so would lose Section 230’s critical protection against being treated as the publisher of user-generated content — and injured parties could sue the platform for violating this “duty of care.”
The debate over algorithmic content recommendation has been going on for years. Lower courts have almost uniformly held that Section 230 immunizes social media platforms from lawsuits claiming that algorithmic recommendation of harmful content contributed to terrorist attacks, mass shootings, and racist attacks. When faced with the question in 2023, the Supreme Court declined to rule on the scope of Section 230 — opting instead to hold that the claims of algorithmic aiding and abetting at issue would not survive either way.
But there’s an important question that usually gets lost in the heated debate over Section 230: Would such lawsuits be viable even if they could be brought?
In a Wall Street Journal op-ed making the case for his bill, Sen. Curtis wrote, “We hold pharmaceutical companies accountable when their products cause injury. There is no reason Big Tech should be treated differently.”
At first blush, this argument has an instinctive appeal. But it ultimately dooms itself because there is a reason to treat social media platforms differently. That reason is the First Amendment, which enshrines a constitutional right to free speech — a protection not shared by prescription drugs.
Perhaps anticipating this point, Sen. Curtis insists that the Algorithm Accountability Act poses no threat to free speech: “Free speech means you can say what you want in the digital town square. Social-media companies host that town square, but algorithms rearrange it.” But free speech doesn’t only protect users’ right to post online free of government censorship; it also protects the editorial decisions of those that host those posts — including algorithmic “rearranging,” to use the senator’s phrase. As the Supreme Court recently affirmed in Moody v. NetChoice:
When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
The “rearranging” of speech is just as protected as the speech itself, as when a newspaper decides which stories to print on the front page and which letters to the editor to publish. That is no less true for social media platforms. In fact, the term “content-recommendation algorithm” itself points to its expressive nature. Recommending something is a message — “I think you would find this interesting.”
The Moody Court also emphasized the expressive nature of arranging online content (emphasis added): “Deciding on the third-party speech that will be included in or excluded from a compilation — and then organizing and presenting the included items — is expressive activity of its own.” Similarly, while dismissing exactly the kind of case the Algorithm Accountability Act would enable, the U.S. Court of Appeals for the Fourth Circuit wrote this past February: “Facebook’s decision[s] to recommend certain third-party content to specific users . . . are traditional editorial functions of publishers, notwithstanding the various methods they use in performing” them.
So the First Amendment is at least implicated when Congress institutes “accountability” for a platform’s arrangement and presentation of user-generated content, unlike with pharmaceutical safety regulations. But does it prohibit Congress from imposing the kind of liability the Algorithm Accountability Act creates?
Yes. Two well-established principles explain why.
First: As the Supreme Court has long recognized, imposing civil liability for protected speech raises serious First Amendment concerns.
Second: Except for the exceedingly narrow category of incitement — where the speaker intended to spur imminent unlawful action by saying something that was likely to cause such action — the First Amendment demands that we hold the wrongdoer accountable for their own conduct, not the people whose words they may have encountered along the way.
The U.S. Court of Appeals for the Fifth Circuit explained why these principles preclude liability for “negligently” conveying “harmful” ideas:
If the shield of the first amendment can be eliminated by proving after publication that an article discussing a dangerous idea negligently helped bring about a real injury simply because the idea can be identified as ‘bad,’ all free speech becomes threatened.
In other words, faced with a broad, unmeetable duty to anticipate and prevent ideas from causing harm, media would be chilled into publishing, broadcasting, or distributing only the safest and most anodyne material to avoid the risk of unpredictable liability.
For this reason, courts have — for nearly a century — steadfastly refused to impose a duty of care to prevent harms from speech. A few noteworthy examples are illustrative:
- Dismissing a lawsuit alleging that CBS’ television programming desensitized a child to violence and led him to shoot and kill his elderly neighbor, one federal court said of the duty of care sought by the plaintiffs:
The impositions pregnant in such a standard are awesome to consider . . . Indeed, it is implicit in the plaintiffs’ demand for a new duty standard, that such a claim should exist for an untoward reaction on the part of any ‘susceptible’ person. The imposition of such a generally undefined and undefinable duty would be an unconstitutional exercise by this Court in any event.
- In a case brought by the victim of a gruesome attack alleging that NBC knew of studies on child violence putting them on notice that some viewers might imitate violence portrayed on screen, the court wrote:
[T]he chilling effect of permitting negligence actions for a television broadcast is obvious. . . . The deterrent effect of subjecting [them] to negligence liability because of their programming choices would lead to self-censorship which would dampen the vigor and limit the variety of public debate.
- Affirming dismissal of a lawsuit alleging that Ozzy Osbourne’s “Suicide Solution” caused a minor to kill himself, the court described the chilling effect such liability would cause:
[I]t is simply not acceptable to a free and democratic society to impose a duty upon performing artists to limit and restrict the dissemination of ideas in artistic speech which may adversely affect emotionally troubled individuals. Such a burden would quickly have the effect of reducing and limiting artistic expression to only the broadest standard of taste and acceptance and the lowest level of offense, provocation and controversy.
- When the family of a teacher killed in a school shooting sued makers and distributors of violent video games and movies, the court wrote of the suit:
Given the First Amendment values at stake, the magnitude of the burden that Plaintiffs seek to impose on the Video Game and Movie Defendants is daunting. Furthermore, the practical consequences of such liability are unworkable. Plaintiffs would essentially obligate these Defendants, indeed all speakers, to anticipate and prevent the idiosyncratic, violent reactions of unidentified, vulnerable individuals to their creative works.
In his op-ed, Sen. Curtis wrote, “The problem isn’t what users say, but how algorithms shape and weaponize it.” But the “problem” this bill seeks to remedy very much is what users say. A content recommendation algorithm in isolation can’t cause any harm; it’s the recommendation of certain kinds of content (e.g., radicalizing, polarizing, etc.) that the bill seeks to stymie.
And that content is overwhelmingly protected by the First Amendment, regardless of whether the posts might, individually or in the aggregate, cause an individual to commit violence. When the City of Indianapolis created civil remedies for people harmed by pornography, the U.S. Court of Appeals for the Seventh Circuit accepted the premise that pornography “perpetuate[s] subordination” and leads to cognizable societal and personal harms, but struck down the ordinance anyway:
[T]his simply demonstrates the power of pornography as speech. All of these unhappy effects depend on mental intermediation. Pornography affects how people see the world, their fellows, and social relations. If pornography is what pornography does, so is other speech.
[ . . . ]
Racial bigotry, anti-semitism, violence on television, reporters' biases — these and many more influence the culture and shape our socialization. None is directly answerable by more speech, unless that speech too finds its place in the popular culture. Yet all is protected as speech, however insidious. Any other answer leaves the government in control of all of the institutions of culture, the great censor and director of which thoughts are good for us.
And that’s why the Algorithm Accountability Act also threatens users’ expressive rights. There’s simply no reliable way to predict whether any given post might, somewhere down the line, factor into someone else’s independent decision to commit violence — especially at the scale of modern social media. Faced with liability for guessing wrong, platforms will effectively have two realistic choices: aggressively re-engineer their algorithms to bury anything that could possibly be deemed divisive (and therefore risky), or — far more likely — simply ban all such content entirely. Either road leads to the same place: a shrunken public square where whole neighborhoods of protected speech have been bulldozed.
“What a State may not constitutionally bring about by means of a criminal statute,” the Supreme Court famously wrote in New York Times v. Sullivan, “is likewise beyond the reach of its civil law.” Forcing social media platforms to do the dirty work of censorship on pain of expensive litigation and expansive liability is no less offensive to the First Amendment than a direct government speech regulation.
Political violence is a real and pressing problem. But history has already taught us that trying to scrub away every potential downstream harm of speech is a dead end. And a system of free speech requires us to resist the temptation to try in the first place.