
Did Grok break the law?

Reports that it generated nudes of real people raise questions about the safety of AI
[Image: Grok AI chatbot logo seen on a smartphone screen. Shutterstock.com]

Grok, the AI system integrated into X, has been used to turn real pictures of people — including minors — into nude or sexualized imagery.

People are understandably outraged. This episode shows how a person can use AI tools to violate a sense of human dignity and security with little more than a photo and a prompt. The fact that the tool was used to target real people, especially children, without their knowledge or consent is particularly disturbing to many. 

Some have responded by calling for new laws. That instinct is understandable. But many proposals would raise serious First Amendment concerns, and before trying to scratch the “do something” itch with new legislation, it’s important to first ask: does existing law already prohibit this? 

In many cases, the answer is yes.

Federal criminal law prohibits knowingly making or sharing child sexual abuse material involving actual children, whether it is created with a camera or with the assistance of AI. Likewise, AI-generated material that meets the high bar for obscenity and is publicly created or distributed is not protected speech. Users who knowingly prompt an AI system to create such content, or who share it, can already face criminal prosecution, and Section 230's protections don't shield anyone from federal criminal prosecution. AI operators that knowingly provide substantial assistance to those creating this unlawful content may face legal exposure as well.

Existing law also provides other avenues to hold people accountable through private lawsuits. Civil claims for harms like intentional infliction of emotional distress, invasion of privacy, defamation, and misappropriation of likeness may also be available to people depicted in the images created by Grok, provided the elements of those torts, and any constitutional protections built into them, are satisfied. These types of claims allow victims to collect monetary damages against users who make, share, or sell such content and, in limited cases, developers.

At the same time, it's important to be clear about the limits of the law. The law will never be able to fully prevent bad actors from doing bad things, and the Constitution limits how far the government can go in trying. Nudity and sexual content involving adults are generally protected by the First Amendment unless they fall into a narrow category of unprotected speech, and the use of AI does not change that constitutional analysis. This means a great deal of offensive or distasteful expression remains protected speech, even when it disturbs us or makes us uncomfortable.

This matters. If every technological failure becomes an excuse to expand government authority over speech, the predictable outcome is overreach that chills expression and silences voices. 

Public pressure, reputational risk, and the possibility of lawsuits are powerful incentives for xAI, the parent company of both Grok and X, to improve safeguards, redesign systems, and limit misuse. That is the preferred path. Editorial and design decisions made by private companies are far less dangerous than granting the government broad power to regulate speech and assume control over platforms protected by the First Amendment.

Using Grok’s failures as a justification for sweeping new AI speech regulations would be a mistake. Existing laws already target real harms and real actors. Broad new rules risk overreach, chilling lawful expression and empowering the state in ways that are difficult to unwind.

The right response here starts with enforcing the laws we already have and resisting the temptation to trade constitutional principles for the illusion of control.
