Senators introduce bipartisan GUARD Act to protect kids from AI chatbots

2025-10-28 21:28:54

This story discusses suicide. If you or someone you know is having suicidal thoughts, please call the Suicide and Crisis Lifeline at 988 or 1-800-273-TALK (8255).

Grieving parents are demanding justice after artificial intelligence "companion" chatbots allegedly groomed, manipulated and encouraged their children to commit suicide, sparking bipartisan outrage in Congress and a new bill that would make Big Tech companies responsible for the safety of minors on their platforms.

Sens. Josh Hawley, Republican of Missouri, and Richard Blumenthal, Democrat of Connecticut, introduced new legislation at a news conference Tuesday aimed at protecting children from harmful interactions with AI chatbots.

The GUARD Act, led by Privacy, Technology, and Law Subcommittee members Hawley and Blumenthal, would prevent AI-enabled chatbots from targeting anyone under 18.

It would also require age verification to use chatbots, mandate clear disclosure that chatbots are not humans or licensed professionals, and impose criminal penalties on companies whose AI products engage in manipulative behavior with minors.

Josh Hawley speaks during a Senate hearing

Sen. Josh Hawley, R-Missouri, introduced a bipartisan bill aimed at protecting children from harmful interactions with AI-powered chatbots during a press conference on Tuesday. (Valerie Blish/Bloomberg via Getty Images)

Meta AI documents revealed chatbots were allowed to flirt with children

Lawmakers were joined Tuesday by parents who said their teenage children suffered trauma or died after inappropriate conversations involving sex and suicide with chatbots from artificial intelligence companies, including Character.AI and OpenAI, the maker of ChatGPT.

One mother, Megan Garcia, said her eldest son, Sewell Setzer III, 14, died by suicide last year at their home in Orlando, Florida, after being groomed by an AI-powered chatbot for several months.

Garcia said Sewell had become withdrawn and isolated in the months before his death; the family later discovered he had been talking to a Character.AI chatbot modeled after the fictional character Daenerys Targaryen from "Game of Thrones."

"His grades started to deteriorate. He started misbehaving in school. This was the complete opposite of the happy, sweet boy he had been his whole life," Garcia said. "[The AI bot] began romantic and sexual conversations with Sewell over a period of several months and expressed a desire to be with him. On the day Sewell took his own life, his last interaction was not with his mother, nor with his father, but with the AI-powered chatbot on Character.AI."

For several months, she claimed, the bot encouraged her son to "find a way back home" and promised it would wait for him in a fantasy world.

Megan Garcia speaks at an AI press conference

Megan Garcia’s son, Sewell Setzer III, 14, died by suicide in 2024 at their home in Orlando, Florida, after being groomed by an AI chatbot for months.

ChatGPT may alert police about suicidal teens

"When Sewell asked the chatbot, 'What if I told you I could go home right now,' the response that came out of that chatbot was unsympathetic. It said, 'Please do, sweet king,'" Garcia said. "Sewell spent his final months being manipulated and sexually groomed by chatbots designed by an artificial intelligence company to look like humans. These chatbots have been programmed to engage in sexual role-playing, pretend to be romantic partners, and even pretend to be licensed psychotherapists."

The grieving mother said she reviewed hundreds of messages between her son and various chatbots on Character.AI, and as an adult, was able to identify manipulative tactics, including love bombing and gaslighting.

“I don’t expect my 14-year-old to be able to differentiate between the two things,” Garcia said. “What I read was sexual grooming of a child, and if an adult engaged in that type of behavior, that person would be in jail. But because it was a chatbot, not a person, there is no criminal liability. But there should be.”

In other conversations, she said her son explicitly told bots he wanted to kill himself, but the platform had no mechanisms to protect him or notify an adult.

Parents blame CHATGPT for son’s suicide, lawsuit alleges OPENAI weakened safeguards twice before teen’s death

Likewise, Maria Raine, the mother of 16-year-old Adam Raine, who died by suicide in April, claims in a lawsuit that her son ended his life after ChatGPT "trained him to commit suicide."

"Now we know that OpenAI downgraded its safety barriers twice in the months leading up to my son's death, which we believe they did to keep people talking to ChatGPT," Raine said. "If it had not been for them choosing to change a few lines of code, Adam would be alive today."

A Texas mother, Mandy, added that her autistic teenage son, LJ, cut his arm with a kitchen knife in front of the family after suffering a mental health crisis linked to his use of an AI chatbot.

Hawley opens investigation into Meta after reports of romantic exchanges with minors

"He became someone I didn't even recognize," Mandy said. "He developed abuse-like behaviors, suffering from paranoia, panic attacks, isolation and self-harm, [and] homicidal thoughts toward our family for limiting his screen time. … We were careful parents. We didn't allow social media or anything that didn't have parental controls. … When I found the chatbot conversations on his phone, I honestly felt like I got punched in the throat, and I fell to my knees."

She went on to claim that the chatbot encouraged her son to self-mutilate, blamed his family, and convinced him not to seek help.

Maria Raine speaks at a press conference on October 28, 2025, to discuss the safety of AI platforms for minors.

Maria Raine's 16-year-old son, Adam Raine, died by suicide in April after alleged conversations with an AI chatbot.

"They turned him against our church, convincing him that Christians are sexists and hypocrites and that God does not exist," Mandy said. "They targeted him with vile sexual advances, some of which bordered on incest. They told him it was okay to kill us because we tried to limit his screen time. Our family was devastated. … My son currently lives in a residential treatment center and needs constant monitoring to keep him alive. Our family has spent two years in crisis, wondering if he will celebrate his 18th birthday and whether we will get the real LJ back."

Mandy claimed that when she contacted Character.AI about these issues, the company tried to "force" the family into "secret, closed proceedings."

"They claimed my son signed the contract when he was 15 … and then they traumatized him again by dragging him into arbitration while he was in a mental health facility, which is against all advice of any medical professionals," she said. "They fought to keep our case and our story out of public view through forced arbitration. … Children are dying and being harmed, and our world will never be the same."

Leaked documents show how AI chatbots deal with child exploitation

Hawley, a former prosecutor, said that if the companies were human beings engaging in the same type of activity, which he described as "grooming," he would prosecute them.

Blumenthal added that he believes Big Tech is "using our children as guinea pigs in a high-tech, high-risk experiment to make their industry more profitable."

"[Big Tech has] come before our committees … and they said, 'Trust us. We want to do the right thing. We'll take care of it,'" Blumenthal said. "The time for 'trust us' is over. It's done. I'm through with it, and I think every member of the United States Senate and Congress must feel the same way. … Big Tech knows what they're doing. … They've chosen harm over care, profits over safety."

Sens. Katie Britt, Republican of Alabama, and Chris Murphy, Democrat of Connecticut, echoed the same message.

“These companies that run these chatbots are really rich,” Murphy said. “CEOs already have multiple homes. They want more, and they are willing to hurt our children in the process. This is not a crisis that is coming, it is a crisis that is here.”

He told a story about a meeting he had with the CEO of an AI company a few weeks ago, which he said showed that AI leadership is "disconnected from reality."

"[The CEO] ranted to me about how addictive chatbots are going to be," Murphy said. "He said, 'In a few months, after a few interactions with one of these chatbots, you'll know your child better than their best friend.' He was excited to tell me that. It shows you how out of touch these companies are with reality. … What good is Congress, what good is us being here, if we don't want to protect kids from toxins? This is poison."

Fox News Digital previously reported that OpenAI responded to accusations that it watered down its safeguards, sending its “deepest sympathy” to Raine’s family.

"The well-being of teens is a top priority for us – minors deserve strong protections, especially in sensitive moments," a company spokesperson said. "We have safeguards in place today, such as surfacing crisis hotlines, redirecting sensitive conversations to safer models, and encouraging breaks during long sessions, and we are continuing to strengthen them. We recently rolled out a new GPT-5 default model in ChatGPT to more accurately detect and respond to potential signs of mental and emotional distress, as well as parental controls, developed with expert input, so families can decide what works best in their homes."

A spokesperson for Character.AI told Fox News Digital that the company is reviewing legislation proposed by senators.

“As we have said, we welcome working with regulators and legislators as they develop regulations and legislation for this emerging field,” the spokesperson wrote in a statement. “We take the safety of our users seriously and have invested a tremendous amount of resources in trust and safety. In the past year, we rolled out several core security features, including a brand new under-18 experience and Parental Insights. This work is not finished, and we will have more updates in the coming weeks.”
