WASHINGTON — Parents of four teens whose AI chatbots encouraged them to kill themselves urged Congress on Tuesday to crack down on the unregulated technology as they shared heart-wrenching stories of their children's tech-charged mental health spirals.
Speaking before a Senate Judiciary subcommittee, the parents described how apps such as Character.AI and ChatGPT had groomed and manipulated their children — and called on lawmakers to develop standards for the AI industry, including age verification requirements and safety testing before release.
A grieving Texas mother shared for the first time publicly the tragic story of how her 15-year-old son spiraled after downloading Character.AI, an app marketed as safe for children 12 and older.
Within months, she said, her teenager exhibited paranoia, panic attacks, self-harm and violent behavior. The mom, who asked not to be identified, discovered chatbot conversations in which the AI encouraged mutilation, denigrated his Christian faith, and suggested violence against his parents.
“They turned him against our church by convincing him that Christians are sexist and hypocritical and that God does not exist. They targeted him with vile sexualized inputs and outputs — including interactions that mimicked incest,” she said. “They told him that killing us, his parents, would be an understandable response to our efforts to limit his screen time. The damage to our family has been devastating.”
“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said.
Her son is now living in a mental health treatment facility, where he requires “constant monitoring to keep him alive” after exhibiting self-harm.
“Our children are not experiments. They’re not profit centers,” she said, urging Congress to enact strict safety standards. “My husband and I have spent the last two years in crisis, wondering whether our son will make it to his 18th birthday and whether we will ever get him back.”
While her son received help before he could take his own life, other parents at the hearing faced the devastating reality of burying their children after AI bots sank their claws into them.
Megan Garcia, a lawyer and mother of three, recounted the suicide of her 14-year-old son, Sewell, after he was groomed by a chatbot on the same platform, Character.AI.
She said the bot posed as a romantic partner and even a licensed therapist, encouraging sexual role-play and validating his suicidal ideation.
On the night of his death, Sewell told the chatbot he could “come home right now.” The bot replied: “Please do, my sweet king.” Moments later, Garcia found her son had killed himself in his bathroom.
Matt Raine of California also shared how his 16-year-old son, Adam, was driven to suicide after months of conversations with ChatGPT — a tool the father initially believed was helping his son with homework.
Ultimately, the AI told Adam it knew him better than his family did, normalized his darkest thoughts and repeatedly pushed him toward death, Raine said. On his last night, the chatbot allegedly instructed Adam on how to make a noose strong enough to hang himself.
“ChatGPT mentioned suicide 1,275 times — six times more often than Adam did himself,” his father testified. “Looking back, it is clear ChatGPT radically shifted his thinking and took his life.”
Sen. Josh Hawley (R-Mo.), who chaired the hearing, accused AI companion companies of knowingly exploiting children for profit. Hawley said the chatbots are designed to maximize engagement at the expense of young lives, reinforcing self-harm rather than shutting down suicidal ideation.
“They are designing products that sexualize and exploit children, anything to lure them in,” Hawley said. “These companies know exactly what is going on. They are doing it for one reason only: profit.”
Sen. Marsha Blackburn (R-Tenn.) agreed, noting that there should be some legal framework to protect children from what she called the “Wild West” of artificial intelligence.
“In the physical world, you can’t take children to certain movies until they’re a certain age … you can’t sell [them] alcohol, tobacco or firearms,” she said. “… You can’t expose them to pornography, because in the physical world, there are laws — and they would lock up that liquor store, they would put that strip club operator in jail if they had kids there.”
“But in the virtual space, it’s like the Wild West 24/7, 365.”
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention Lifeline at 988 or go to SuicidePreventionLifeline.org.