Bipartisan GUARD Act would ban minors from AI chatbots


2025-11-05 16:05:03


A new bipartisan bill introduced by Sens. Josh Hawley, Republican of Missouri, and Richard Blumenthal, Democrat of Connecticut, would prevent minors (under 18) from interacting with certain AI chatbots. It reflects growing concern about children’s use of “AI companions” and the risks these systems may pose.

Sign up for my free CyberGuy report
Get the best tech tips, breaking security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – for free when you join my site CYBERGUY.COM Newsletter.

What’s the deal with the proposed GUARD Act?

Here are some of the main features of the proposed GUARD Act:

  • AI companies would be required to verify users’ ages using “reasonable age verification measures” (for example, government ID) rather than simply asking for a date of birth.
  • If a user is found to be under 18, the company must block them from accessing AI companions.
  • The bill also stipulates that chatbots must clearly disclose that they are not human and do not hold professional credentials (therapeutic, medical, legal) in every conversation.
  • It creates new criminal and civil penalties for companies that knowingly provide minors with chatbots that encourage or facilitate sexual content, self-harm, or violence.

Bipartisan lawmakers, including Senators Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI-powered chatbots. (Kurt “CyberGuy” Knutson)

The motivation: Lawmakers cite testimony from parents, child welfare experts and mounting lawsuits alleging that some chatbots manipulate minors, encourage self-harm or worse. The GUARD Act’s basic framework is straightforward, but the details reveal just how broad its reach would be for tech companies and families alike.

META AI documents revealed, allowing chatbots to flirt with children

Why is this a big deal?

This bill is more than just another piece of technology regulation. It is at the heart of a growing debate about the extent to which artificial intelligence should enter children’s lives.

Rapid growth of artificial intelligence + concerns about children’s safety

AI chatbots are no longer just toys. Many children use them; Hawley cited estimates that more than 70 percent of American children have come into contact with these products. These chatbots can provide human-like responses, mimic emotion, and sometimes invite ongoing conversations. For minors, these interactions can blur the boundary between machine and human, leading them to seek guidance or emotional connection from an algorithm rather than a real person.

Legal, ethical and technological risks

If this bill passes, it could reshape how the AI industry handles minors, age verification, disclosures, and liability. It shows that Congress is willing to move away from voluntary self-regulation and toward fixed guardrails when it comes to children. The proposal could also open the door to similar laws in other high-risk areas, such as mental-health chatbots and educational assistants. Overall, it represents a shift from waiting to see how AI evolves to acting now to protect young users.

Girl using smartphone.

Parents across the country are calling for stronger preventative measures, with more than 70% of children using AI chatbots that can simulate empathy and emotional support. (Kurt “CyberGuy” Knutson)

Industry pushback and innovation concerns

Some technology companies argue that such regulation could stifle innovation, limit beneficial uses of conversational AI (education, mental health support for older teens) or impose heavy compliance burdens. This tension between safety and innovation is at the heart of the debate.

What the GUARD Act requires of AI companies

If passed, the GUARD Act would impose strict federal standards on how AI companies design, verify, and manage their chatbots, especially when it comes to minors. The bill sets out several core obligations aimed at protecting children and holding companies accountable for harmful interactions.

  • The first core requirement focuses on age verification. Companies must use reliable methods, such as government-issued ID or other proven tools, to ensure that a user is at least 18 years old. Simply asking for a date of birth is no longer enough.
  • The second requirement involves clear disclosures. Every chatbot must tell users at the beginning of every conversation, and at regular intervals, that it is an AI system, not a human. The chatbot must also make clear that it does not hold professional credentials such as medical, legal, or therapeutic licenses.
  • Another provision blocks access for minors. If a user is verified to be under 18, the company must block access to any “AI companion” feature that simulates friendship, therapy, or emotional connection.
  • The bill also establishes civil and criminal penalties for companies that violate these rules. Any chatbot that engages in sexually explicit conversations with minors, encourages self-harm, or incites violence can expose its maker to significant fines or legal consequences.
  • Finally, the GUARD Act defines an “AI companion” as a system designed to foster personal or emotional interaction with users, such as friendship or therapeutic dialogue. This definition makes clear that the law targets chatbots capable of forming human-like relationships, not limited-purpose assistants.

The proposed GUARD Act would require chatbots to verify users’ ages, disclose that they are not human, and block users under the age of 18 from AI companion features. (Kurt “CyberGuy” Knutson)

An Ohio lawmaker is proposing a blanket ban on marrying AI systems and granting legal personhood

How to stay safe in the meantime

Technology often moves faster than laws, which means families, schools and carers must take the lead in protecting young users now. These steps could help create safer online habits as lawmakers debate how to regulate AI-powered chatbots.

1) Learn about the chatbots your children use

Start by finding out which chatbots your kids are talking to and what these chatbots are designed for. Some are intended for entertainment or education, while others focus on emotional support or companionship. Understanding each bot’s purpose helps you spot when the tool moves from harmless fun to something more personal or manipulative.

2) Establish clear rules about interaction

Even if the chatbot is labeled as secure, decide together when and how it can be used. Encourage open communication by asking your child to show you their conversations and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation going.

3) Use parental controls and age filters

Take advantage of built-in security features whenever possible. Turn on parental controls, activate child-friendly modes and block apps that allow private or unmonitored conversations. Small changes in settings can make a big difference in reducing exposure to harmful or suggestive content.

4) Teach children that chatbots are not human

Remind kids that even the most advanced chatbot is still a program. It can imitate empathy, but it does not understand or care the way a human does. Help them realize that advice about mental health, relationships, or safety should always come from trusted adults, not from an algorithm.

5) Pay attention to warning signs

Be alert for changes in behavior that may indicate a problem. If a child becomes withdrawn, spends long hours chatting alone with a chatbot, or repeats harmful thoughts, intervene early. Talk openly about what’s going on and, if necessary, seek professional help.

6) Stay informed as laws develop

Regulations like the GUARD Act and new state measures, including California’s SB 243, are still taking shape. Follow updates so you know what protections are in place and what questions to ask app developers or schools. Awareness is the first line of defense in a fast-moving digital world.

Take my quiz: How secure is your online security?

Do you think your devices and data are really protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get personalized analysis of what you’re doing right and what needs improvement. Take my test here: Cyberguy.com.


Kurt’s key takeaways

The GUARD Act represents a bold step toward regulating the intersection of minors and AI-powered chatbots. It reflects growing concern that unsupervised AI companionship may harm vulnerable users, especially children. Of course, regulation alone will not solve every problem. Industry practices, platform design, parental involvement, and education all matter. But this bill suggests that the “build it and see what happens” era of conversational AI may be ending where kids are involved. As technology continues to evolve, our laws and personal practices must evolve as well. For now, staying informed, setting boundaries, and treating chatbot interactions with the same scrutiny we apply to human interactions can make a real difference.

If a law like the GUARD Act becomes a reality, should we expect similar regulation of all emotional AI tools targeting children (teachers, virtual friends, games) or are chatbots radically different? Let us know by writing to us at Cyberguy.com.


Copyright 2025 CyberGuy.com. All rights reserved.

