Why we shouldn’t let the government hit mute on AI speech

AI speech is speech, and the government shouldn’t get to rewrite it. But across the country, officials are pressuring AI developers to bend outputs to their political preferences.
That danger isn’t theoretical. In July, Missouri’s (now former) Attorney General Andrew Bailey sent OpenAI a letter threatening to investigate the company. In it, Bailey accused its AI chatbot ChatGPT of partisan bias after it ranked President Donald Trump lowest among recent presidents on anti-Semitism. Calling the answer “objectively” wrong, Bailey’s letter cited Trump’s relocation of the U.S. embassy to Jerusalem, the Abraham Accords, and his Jewish family ties as proof the ranking defies “objective facts.”
Although no lawsuit was filed, the looming threat no doubt put considerable pressure on the company to revise its outputs — a preview of how common and far-reaching such tactics could become if courts ever hold, as some critics of AI argue, that AI speech isn’t protected by the Constitution.
Lawsuits against Character.AI — another chatbot, geared more toward companionship and casual conversation — such as Garcia v. Character Technologies, Inc., show that judges are already being asked to decide whether AI outputs are speech or something else entirely. If courts adopt the view that AI output isn’t protected by the First Amendment, nothing would stop government officials from simply mandating outputs rather than merely applying pressure. That’s why FIRE filed an amicus curiae “friend-of-the-court” brief in this litigation to emphasize that the First Amendment shields this expressive technology.
The First Amendment’s protections don’t vanish simply because artificial intelligence is involved. AI is another medium (or tool) for expression. The engineers behind it and the users who prompt it are practicing their craft just as writers, directors, and journalists practice theirs. So when officials pressure AI developers to alter or delete outputs, they are censoring speech.
By framing ChatGPT’s ranking as “consumer misrepresentation,” Bailey tried to turn protected political speech into grounds for a fraud investigation. Instead of using consumer protection laws for their intended purpose — investigating faulty toasters or false advertising, for example — Bailey’s gambit bent them into a mechanism for censoring AI-generated speech. The letter signals to every developer that a single politically sensitive answer could trigger a government investigation.
The irony here is striking: Bailey represented the state of Missouri in Murthy v. Missouri, the high-profile lawsuit accusing the Biden administration of jawboning social-media platforms into removing COVID-19 content. In that case, Bailey argued the federal government’s nudging violated the First Amendment because it coerced private actors to police speech the government couldn’t ban outright.

Government pressure is already reshaping AI in other ways. OpenAI now warns that your ChatGPT conversations may be scanned, reviewed, and possibly reported to the police. Users thus face a choice: risk a visit from law enforcement, or forgo the benefits these AI tools offer. Absent robust First Amendment safeguards, the result is government censorship (including jawboning) on one side and surveillance on the other. Both narrow the space for open inquiry that AI ought to expand.
FIRE’s answer is twofold: first, apply the First Amendment to AI speech as it applies to any other expressive medium; second, improve transparency so the public can verify the government is complying. Our Social Media Administrative Reporting Transparency (“SMART”) Act would require federal officials to disclose their communications with an interactive computer service (like a chatbot) about moderating content. That way, users, developers, and the public can see when officials try to influence what AI says. Similar state-level reforms could ensure that no government coercion occurs in the dark.
Free expression shouldn’t rise and fall with the party in power, forcing AI engineers to reshape their models to fit every new political climate. If we want AI to widen the marketplace of ideas, strong First Amendment protections are the place to start.