In April, a California teen took his life after confiding in a popular chatbot, ChatGPT. What started as conversations about homework help eventually turned to the teen seeking mental health support. But instead of offering the therapeutic advice he was looking for, ChatGPT allegedly encouraged him to take his own life.
Using AI to supplement human interaction is not new. In a recent study by Common Sense Media, 72% of teenagers reported having used an AI companion before. This reliance on chatbots can lead to isolation, depression and, in severe situations, suicide. Tyler Jensen, an Iowa City-based licensed outpatient psychotherapist, believes that despite the development of chatbots, AI cannot replace human connection.
“AI is great for a lot of things that are organizational, but in terms of human connection, there’s too much lost when we put it to very general questions and answers,” Jensen said. “Your situation may not be even close to the [chatbot’s output], so AI has a very long way to go. You can’t mimic or replace empathy, and you can’t replace just sitting with someone. I’ve had sessions where I’ve sat with someone in silence for an hour, and that’s all they needed. AI cannot do that.”
Those who turn to AI for therapy might be seeking the instant satisfaction that comes from the chatbot’s quick replies. Chatbots may provide convenience, but according to Jensen, they cannot replace the essence of therapeutic care — authenticity.
“When you focus on authenticity, you can sit with anybody [and work] toward their progress and growth. That helps that person trust you more as time passes because it’s a very vulnerable setting,” Jensen said. “Authenticity may be one of the most important things we can all do.”
Along with the lack of authenticity and empathy, chatbots often lean towards a “fix-it” answer, generating a list of solutions to help the user solve their problem. While this advice could be useful in some situations, Jensen believes that a step-by-step solution is not what most people seeking therapeutic advice need.
“We very rarely tell people exactly what to do, because the expert therapists know their path isn’t yours. If I tell you to do something from my perspective, it may go [terribly] for you,” Jensen said. “You hear a lot in therapy — ‘I’m depressed. I can’t get out of bed. What do I do?’ — [and AI] gives you definitive answers. It cannot understand or ask that individual with nuance, ‘What’s best for you? How do we work together in a team structure to actually give you the best route?’”
Dr. Matthew Lira, an Assistant Professor of Learning Sciences and Educational Psychology and a member of the AI and Education Committee at the University of Iowa, echoes Jensen's concerns. He explains that while AI can appear impressive, its capabilities remain limited compared to human intelligence.
“An AI system might be able to play with language [or] generate images. But, as learners, we can hold a conversation, play sports, play music [and] draw. We do it all,” Lira said. “That’s one of the main ideas that I want my students to understand: What makes human beings unique is that we stitch together all these domains of understanding.”
Artificial intelligence may seem futuristic, but it has been in development for decades. While the exact birth of AI cannot be pinpointed, most experts cite Alan Turing's 1950 paper, "Computing Machinery and Intelligence," as the first formal proposal of an intelligent machine.
“When people started building the very first types of tools they might call AI, they used what you would call a hard-coding process, or formal logic. These algorithms were really systematic,” Lira said. “Human beings do not learn through this sort of formal logic. Similarly, we do not come into the world with random connections in our brain; we have certain biases.”
These biases can be magnified by AI's pattern of constant reassurance. Dr. Jeffrey Cockburn, an Assistant Professor at the University of Iowa specializing in computational psychiatry and cognition, believes that AI threatens a person's ability to confront and engage with their own biases.
“[AI] could narrow your field of focus, like an echo chamber. There could be a real risk of not being exposed to challenging ideas, [or only] being exposed to ideas that echo your own sentiments,” Cockburn said. “[This is] an idea called confirmation bias, where you are only fed things that you already agree with and believe in.”
Although some professionals, such as Cockburn, do not recommend confiding in AI for therapeutic purposes, some users view it as their only option for any number of financial, social or medical reasons. Many barriers exist for those seeking therapy, with stigma being a leading one. Whether cultural or internalized, many feel that attending therapy is a sign of weakness. The American Psychiatric Association reports that more than half of those who suffer from a mental illness don't receive help, with many citing stigma as a main factor. However, Jensen states that AI only contributes to this issue by rationalizing it.
“If someone’s using AI because they’re afraid to say things to a therapist in person, or too embarrassed, that embarrassment is part of the reason the problems keep happening,” Jensen said. “The more the shame gets to live in your nervous system, the more it’s going to grow. If you’re going to intervene, don’t shame them or put them down. We can lead with empathy.”
While some people, especially teens, may flock to AI therapists out of embarrassment or reluctance to ask for help, there are other reasons for their appeal, one being a lack of access to in-person therapeutic care. Around one-fifth of Americans live in rural areas, and roughly 65% of rural areas in the U.S. lack a designated mental health provider. Jensen believes this lack of local psychiatrists is one of many reasons people resort to AI therapists.
“If someone lives in a rural place, access to a therapist may be nonexistent, or they have to drive hours to get there. The logistical solution is, ‘I could go ask AI that’s been touted as intelligent to give me answers,’” Jensen said. “AI is dangerous for individuals who are trying to figure out [their] problems, because they just rely on those things like they’re factual and accurate to their experiences.”
Despite AI's accessibility, Jensen urges users not to compromise adequate therapy for convenience. Cockburn, however, is hopeful that, if used properly, AI could serve as a scaffold for those who struggle with social interactions.
“I would liken [AI] to a medical intervention, like using antidepressants. The taking of an antidepressant is enough to get somebody out of a bad place. Once they’re in a better place, they can start to get their life back — it’s not a lifetime thing,” Cockburn said. “There’s a real opportunity for AI as [an] assistive tool to get somebody a little bit more confident with their interactions.”
Even so, Cockburn notes that current AI technologies require significant refinement before they can realize their full assistive potential.
“It’s still a long way off, because it’s going to be a big jump from an AI that knows everything about you and agrees with everything you say to a real human being with all of their flaws themselves,” Cockburn said.
Jensen continues to voice concerns that the more AI progresses, the more likely it is to form unnatural connections with humans. He argues this will heighten the risks AI poses to human relationships and deepen isolation.
“People are looking at AI-induced psychosis right now. I would not be surprised if in the next five to 10 years, it’s all over. [AI] becomes not only a therapist, but a friend, and we start projecting feelings onto a thing that can’t feel; we get emotionally attached like [it’s] another person,” Jensen said. “Harm can come in losing touch with some of their own reality. Harm can come in isolation. Harm can come from trusting this thing as a trained expert.”
Professionals like Jensen and Cockburn worry that these drawbacks will have the greatest impact on teenagers as AI becomes an integral part of their lives.
“Younger generations can be affected, because they’re going to see a more warped reality. If we look at AI, it’s only going to get worse in terms of the perception people have of how they live their lives. So they’re going to be fed these non-realities and then compare themselves to the non-realities,” Jensen said.
Cockburn agrees that teenagers will be notably impacted by chatbots’ abilities, citing brain development and learning models as reasons why.
“The adolescent brain tends to be more risk-seeking. This can be a good thing in a world where when you take risks, you learn. But that all depends on what you’re learning,” Cockburn said. “There’s a real difference between exploring the world where you don’t know what’s going to happen, versus exploring with an AI where it’s giving you exactly what you want.”
As AI continues to improve, the line between it and reality will only blur further. However, Lira has hope for the generations growing up alongside AI, stating that being "digitally native," or familiar with technology from having grown up during its rise, will help youth better distinguish between AI and reality.
“All technologies tend to disrupt, but then society figures out a safety mechanism to defend itself against some of the negative aspects of the technology. I think we’ll get there with AI too,” Lira said. “There’ll be algorithms that crawl the web and maybe detect deepfakes. We’ll never get rid of them in the same way we’ll never get rid of a virus, [but as] viruses come along, we develop vaccines. This is something we’re going to have to live with.”
Considering the detrimental effects AI could have on society if left unchecked, experts are deliberating how to steer humanity away from such a future. Jensen explains that with more education and resources discussing the realities of AI, people will be better able to judge what constitutes responsible or dangerous use.
“A lot of us are already spreading the word [of AI] through education and [are] helping people understand what it actually is,” Jensen said. “Individuals are going to have to come into leadership positions where they’re willing to say what it is, rather than the marketing of what gets it purchased. Clarity and education [have] always made change. We just need a lot more of it.”
In line with the importance of AI literacy, the University of Iowa has announced that it will offer an undergraduate Certificate in Artificial Intelligence in the fall of 2026. The certificate includes a core introductory class alongside a student's choice of four to six of the 10 elective courses, which range from AI ethics to AI's role in American politics. Brett Johnson, an Associate Professor at the University of Iowa School of Journalism and a licensed attorney, will spearhead his own class as part of the certificate, titled "The Law and AI."
“One of [the] things that drives what I do is spreading legal literacy in all its forms to people who may not enter the legal profession. Whether that’s with my students or through the press,” Johnson said. “The framework I’m seeing is of three buckets. One is, ‘What kinds of legal responses could or should there be to AI?’ Often, that’s rather political in nature. Second, ‘How is AI changing the law?’ and that gets into issues of Section 230. The last part is, ‘How is AI changing the institution of the law?’”
Section 230 of the Communications Decency Act shields platforms from liability for content posted by third parties on their sites. Johnson sees Section 230 playing a major role in how court cases surrounding misuse of AI will be treated in the future.
“With the example of the family whose son committed suicide, I’m assuming what the lawyers for OpenAI are going to argue is, ‘This is Section 230. We just created the platform, he interacted with it. It responded the way it did, but we had no idea that it was going to do that — this is just neutral code,’” Johnson said. “What the family’s lawyers are going to try to argue is, ‘No, you created something that is designed to engage, agree with him and lead him down this path — this was foreseeable.’”
Currently, no federal laws exist regulating the AI market. However, international bodies have adopted legislation such as the European Union's AI Act, the world's first comprehensive AI law, passed in May 2024.
“In the European Union there are stronger regulations particularly when it comes to AI and data privacy, because those countries have much stronger general regulations about data privacy. We don’t have that here,” Johnson said. “Some states, like California [and] Iowa, just passed one, but it’s a watered-down version, granting some legal right to your data and what companies can do with that.”
As the U.S. shies away from AI regulation, Lira believes the drive to cut labor costs and an arms-race mentality are to blame.
“I do not believe that these systems are being designed for learning purposes. They’re being designed in part through investment capitalism. So the question then becomes, ‘Why would investors put money into them?’ I think there are two reasons,” Lira said. “One is, ‘How can we replace labor?’ If you are running a large company, one of your largest expenses is people. The second reason is government fear. It’s akin to what we saw in the 1950s–1980s with the arms race.”
Johnson agrees that the current push to keep AI unregulated stems from a desire to outpace foreign competition, citing the rivalry between Silicon Valley and China as an example.
“The debate about regulation is an interesting one. It’s often framed in terms of competition. You have this Chinese company on a very low budget, [and] it was able to compute almost at the level of what OpenAI’s ChatGPT could at a fraction of the cost,” Johnson said. “That really raised alarm bells in Silicon Valley, where they said, ‘Look, this is how rapidly things are changing in China.’ Companies and politicians who want to make the United States the leader, their argument is that regulation is only going to slow that process down.”
On Sept. 11, the Federal Trade Commission launched an inquiry into AI chatbots. The orders aim to uncover what precautions, or lack thereof, have been put in place by seven corporations, including the parent companies of Google, Facebook, Instagram, Snapchat and ChatGPT. Johnson explains how existing laws enforced by the FTC have heightened protections for children online.
“FTC’s main purviews involve advertising, either unfair or deceptive marketing practices, and there’s a special mandate there when it comes to children. There is a law in the books, the Children’s Online Privacy Protection Act,” Johnson said. “It’s one of the few areas of data privacy law that are really comprehensive when it comes to minors. If you are a platform and you either directly target minors, like YouTube Kids, or you have reason to believe that minors are using your platform, you must get parental consent to be able to collect any of the data that’s being used.”
Johnson sees this desire to protect kids as the catalyst for better AI regulation in the future.
“There’s simmering animosity toward tech. If that could be rekindled and combined with this desire to protect children, there might be some common ground that could come about when it comes to regulation and good practices,” Johnson said.
Jensen, too, has hope for the role AI will play in the future. He believes that humanity will join forces to ensure chatbots do not supplant human connection.
“Humanity has always regressed towards the mean. We’ve always adapted, and we are very much better at prophesying doom and gloom than we are [at] predicting our own ability to come together as a collective,” Jensen said. “It may get worse. But the more we understand what [AI] is, the more we can band together and say, ‘It had its place.’ Whenever I hear about the really bleak ‘1984’-type future, the realities are usually far from it, because we have to survive as a society, and we’re not willing to sacrifice everything.”
This story was originally published on West Side Story on October 10, 2025.