The case for treating adults as adults when it comes to AI chatbots

For many people, artificial intelligence chatbots make daily life more efficient. AI can manage calendars, compose messages, and provide quick answers to all kinds of questions. People interact with AI chatbots to share thoughts, test ideas, and explore language. In ways large and small, this technology is playing a growing role in how we think, work, and express ourselves.

But not all the news is good, and some people want to use the law to crack down on AI.

Recent reports describe a wave of lawsuits alleging that OpenAI’s generative AI chatbot, ChatGPT, caused adult users psychological distress. The filings reportedly seek monetary damages for people who conversed at length with the chatbot’s simulated persona and experienced delusions and emotional trauma. In one reported case, a man became convinced that ChatGPT was sentient and later took his own life.

These situations are tragic and call for genuine compassion. But if these lawsuits succeed, they’ll impose an unworkable expectation on anyone creating a chatbot: scrub anything that could trigger its most vulnerable users. Everyone, even fully capable adults, would effectively be treated as if they were on suicide watch. That standard would chill open discourse.

Like the printing press, the telegraph, and the internet before it, artificial intelligence is an expressive tool. A prompt, an instruction, or even a casual question reflects a user’s intent and expressive choice. A constant across its many uses is human agency: it is ultimately a person who decides what to ask, which responses to keep, what results to share, and how to use the material that emerges. Just like the communicative technologies of the past, AI has the potential to amplify human speech rather than replace it, bringing more storytellers, perspectives, and critiques along with it.

Every new expressive medium has faced public scrutiny and renewed calls for government intervention in its time. After Orson Welles’s famous 1938 “War of the Worlds” radio broadcast about a fictional alien invasion, for example, the Federal Communications Commission received hundreds of complaints urging the government to step in. Many letters expressed fear that the technology could deceive and destabilize people. Despite the panic, neither the broadcaster nor Welles, who went on to cinematic fame, faced any formal consequences, and the dire predictions never materialized.

Early panic rarely aligns with long-term reality. Radio dramas, comic books, television, and the early web all seemed threatening once, yet each eventually found its place in civic life, revolutionizing our ability to communicate and connect.

The attorneys filing lawsuits against these AI companies argue that AI is a product, and that if a product predictably causes harm, safeguards are expected, even for adults. But when the “product” is speech, that expectation meets real constitutional limits. Even when harm seemed foreseeable, courts have long refused to hold speakers liable for the psychological effects of their speech on people who choose to engage with it. Composing rap lyrics or televising reports of violence, for example, won’t expose you to liability for how listeners or viewers react, even if some are spurred to act out.

This principle is necessary to protect free expression. Penalizing people for the emotional or psychological impact of their speech invites the government to police the ideas, too. Recent developments in the UK show how this can play out. Under laws that criminalize speech causing “alarm or distress,” people in England and Wales can be arrested, aggressively prosecuted, or jailed based entirely on the state’s claimed authority to measure the emotional “impact” of what was said. That’s not a model we should import.

A legal framework worthy of a free society should reflect confidence in adults’ ability to pursue knowledge without government intrusion, including through the use of AI tools. Extending child-safety laws or similar liability standards to adult conversations with AI would erode that freedom.

The same constitutional protections apply when adults interact with speech, even speech generated by AI. The First Amendment ensures that we meet challenging, misleading, or even false ideas with more speech rather than censorship. Education and debate are the best means of preserving adults’ ability to judge ideas for themselves. This approach also keeps the state from deciding which messages are too dangerous for people to hear, a power that, if granted, will almost certainly be abused. It is the same principle that secures Americans’ right to read subversive books, hear controversial figures speak, and engage with ideas that offend others.

Regulating adult conversations with AI blurs the line between a government that serves its citizens and one that supervises them. Adulthood presumes the capacity for judgment, including the freedom to err. Being mistaken or misguided is all part of what it means to think and speak for oneself.

At FIRE, we see this dynamic play out daily on college campuses. Institutions of higher education are meant to prepare young adults for citizenship and self-governance, but they often treat students as if discomfort and disagreement were radioactive. Speech codes and restrictions on protests, justified as shields against harm, teach dependence on authority and distrust of one’s own resilience. That same impulse now echoes in calls for AI chatbot regulation.

Yes, words can do harm, even in adulthood. Still, not every harm can be addressed in court or by lawmakers, especially not if it means restricting free expression. Adults are neither impervious to nor helpless before AI’s influence on their lives and minds, but treating them like minors is not the solution.
