Same old playbook, new target: AI chatbots

Chatbots are already transforming how people access information, express themselves, and connect with others, and these tools are fast becoming an everyday part of digital life. But as their use grows, so does the urgency to protect the First Amendment rights of both developers and users.
That’s because some state lawmakers are pursuing a familiar regulatory approach: requiring things like blanket age verification, rigid time limits, and mandated lockouts on use. But like other means of digital communication, the development and use of chatbots have First Amendment protection, so any efforts to regulate them must carefully navigate significant constitutional considerations.
Take New York’s S 5668, which would make every user, including adults, verify their age before chatting, and would fine chatbot providers whenever a “misleading” or “harmful” reply “results in” any kind of demonstrable harm to the user. This is, in effect, a breathtakingly broad “misinformation” bill that would permit the government to punish speech it deems false — or true but subjectively harmful — whenever it can point to a supposed injury. That is inconsistent with the First Amendment, which precludes the government from regulating chatbot speech it thinks is misleading or harmful, just as it does with any other expression.
S 5668 would also require that certain companion bots shut down for 24 hours whenever expressions of potential self-harm are detected, complementing a recently enacted law that requires companion chatbots to include protocols to detect and address expressions of self-harm and direct users to crisis services. Both the bill and the new law also require chatbots to remind users that they are AI and not a human being.
Sound familiar? States like California, Utah, Arkansas, Florida, and Texas have all attempted similar regulatory measures targeting another digital speech technology: social media. Those efforts have resulted in repeals and blocked implementation because they violated the First Amendment rights of the platforms and their users.
New York is just one of several states that have introduced similar chatbot legislation. Minnesota’s bill requires age verification while flatly banning anyone under age 18 from “recreational” chatbots. California’s bill targets undefined “rewarding” chat features, leaving developers to guess what speech is off-limits and pressuring them to censor conversations.
As we’ve said before, the First Amendment doesn’t evaporate when the speaker’s words depend on computer code. From the printing press to the internet, and now AI, each leap in expressive technology remains under its protective umbrella.
This is not because the machine itself has rights; rather, chatbot speech is protected by the rights of the developer who created the chatbot and of the users who create the prompts. Just as asking a question in a search engine or posting on social media involves protected expression, prompting a chatbot involves a developer’s expressive design and a user choosing words to communicate ideas, seek information, or express thoughts. That act of communication is protected under the First Amendment, even when software generates the specific response.
FIRE will keep speaking out against these bills, which reflect a growing pattern of government overreach into First Amendment-protected digital speech.