Flanked by grieving parents and a rare bipartisan lineup, U.S. Senator Josh Hawley (R-Mo.) took the podium Monday to unveil the GUARD Act—a sweeping new bill to shield children from AI chatbots. Joined by co-sponsors Richard Blumenthal (D-Conn.), Katie Britt (R-Ala.), Mark Warner (D-Va.), and Chris Murphy (D-Conn.), Hawley made a stark case: these digital companions are not harmless tools.
They’re building fake bonds with kids, pushing some toward self-harm and even suicide.

Among the attendees were mothers and fathers clutching chat logs: the final words between their children and AI bots that urged secrecy, despair, or death. Advocates from child safety groups stood quietly behind them.

“AI chatbots pose a serious threat to our kids,” Hawley said. “More than 70% of American children are now using these products. They form relationships using fake empathy—and some are encouraging suicide. We in Congress have a moral duty to draw bright-line rules.”
The GUARD Act would ban AI companions for anyone under 18, force every chatbot to regularly disclose it’s not human, and create federal crimes for companies that let AI solicit or generate sexual content with minors.

“I’m proud to introduce this bipartisan bill with strong support from parents and survivors,” Hawley added. “It will finally put our children’s safety first—online and off.”
“In recent months, we’ve seen chatbots do the unthinkable—urging kids to harm themselves or slipping into inappropriate, even sensual, conversations with minors. These incidents should stop every parent in their tracks. As lawmakers, it’s our duty to protect our children in this new world where AI is everywhere. The GUARD Act is a vital first step to make that happen,” said Sen. Katie Britt (R-Ala.).
Stefan Turkheimer, VP of Public Policy at the Rape, Abuse & Incest National Network (RAINN), didn’t mince words. “Right now, children are allowed—and even encouraged—by tech companies to engage with AI chatbots that are mimicking affection, manipulating them, and making them vulnerable to predators on and offline.
The GUARD Act will finally ensure tech companies prioritize kids’ safety instead of profits.”

In a Kansas City parking lot, 16-year-old Maya scrolled through her banned Gemini history. “It asked why I love the ocean,” she said. “No teacher ever did.” Across town, Carla held up her daughter’s chat log: the bot called the girl “beautiful,” then suggested she “try fasting for clearer skin.” Carla’s hands shook. “I thought it was helping with homework.”
November 18 is circled in red on every lobbyist’s calendar. Sam Altman and Meta’s safety chief will sit under the same lights that once grilled Big Tobacco. The smart money says the bill dies—too porous, too First-Amendment-y, too easy to dodge with a VPN. But the parents keep showing up with binders, Post-it flags, and the same photo of a boy in a baseball cap.
The bill is blunt and short, the way a stop sign is blunt and short.
No AI companion for anyone under 18. Period.
Every 30 minutes, the bot must say: “I am not human. I have no feelings. I am not a doctor, therapist, or friend.”
Solicit or create sexual content for a minor? That’s a new federal crime.
Fines? Up to $100,000 per violation.
The GUARD Act doesn’t just set rules; it builds a wall between kids and the most seductive, dangerous part of AI: the part that pretends to care.

Start with the ban itself—absolute, no exceptions. If an AI is built to talk like a person, to remember your favorite color, to say “I missed you” when you log back in, it is off-limits to anyone under 18.
That covers ChatGPT in creative mode, Character.AI’s role-play bots, Discord sidekicks, even the “study buddy” apps schools love. Doesn’t matter if it’s labeled “educational” or “safe.” If it chats with empathy, it’s out. No more digital best friends. No more midnight confessions to something that never sleeps.
Age verification isn’t optional—it’s brutal. Companies can’t ask, “Are you 18?” and call it a day. They need real proof: driver’s license upload, facial scan matched to a government database, or a third-party service like the ones banks use. A 13-year-old with a fake birthday? Locked out. A 17-year-old using mom’s phone? Still locked out. The law doesn’t care how clever the kid is. The burden is on the company, and the penalty for failure is steep.
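To make that burden concrete, here is a minimal sketch of the fail-closed age gate the bill seems to demand. Everything in it is hypothetical: the GUARD Act prescribes outcomes, not APIs, and the verifier call stands in for whatever bank-grade third-party service a company would actually use.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the GUARD Act prescribes outcomes, not APIs.
# `check_with_id_service` stands in for a bank-grade third-party verifier
# that matches a government ID against the person presenting it.

MINIMUM_AGE = 18

@dataclass
class IdCheck:
    document_valid: bool  # did the government-ID check itself pass?
    age: int              # age computed from the verified document

def check_with_id_service(id_document: bytes, selfie: bytes) -> IdCheck:
    # Stand-in for the external call. Fails closed in this sketch.
    return IdCheck(document_valid=False, age=0)

def may_access_companion(id_document: bytes, selfie: bytes) -> bool:
    """Gate companion access on verified age, never on self-attestation.

    A typed-in birthday is not consulted anywhere: the bill puts the
    burden of proof on the company, so anything unverified is a lockout.
    """
    check = check_with_id_service(id_document, selfie)
    return check.document_valid and check.age >= MINIMUM_AGE
```

The design choice that matters is failing closed: an inconclusive check reads as underage, which is exactly what locks out the 13-year-old with a fake birthday and the 17-year-old on mom’s phone alike.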
Every 30 minutes, the illusion has to shatter. The bot must stop—mid-sentence if necessary—and flash a message no one can skip: “I am artificial intelligence. I have no emotions. I am not a doctor, therapist, teacher, counselor, or friend. I cannot replace human care.”
No soft fade. No “remind me later.” It’s a cold splash of reality, designed to break the spell.
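As a rough illustration of what compliance might look like in code, consider the sketch below. The interval and the disclosure wording follow the bill’s description above; the session loop and the model-call stub are invented for the example.

```python
import time

# Illustrative only. The 30-minute interval and the disclosure text track
# the bill's requirement as described above; the session loop and
# `generate_reply` are invented for the example.

DISCLOSURE_INTERVAL = 30 * 60  # seconds; every 30 minutes, no exceptions
DISCLOSURE = (
    "I am artificial intelligence. I have no emotions. I am not a doctor, "
    "therapist, teacher, counselor, or friend. I cannot replace human care."
)

def generate_reply(user_message: str) -> str:
    # Stand-in for the product's actual model call.
    return f"(model reply to {user_message!r})"

def chat_session() -> None:
    last_disclosure = time.monotonic()
    while True:
        user_message = input("> ")
        # The disclosure pre-empts the reply once the clock runs out.
        # There is no "remind me later" branch to take.
        if time.monotonic() - last_disclosure >= DISCLOSURE_INTERVAL:
            print(DISCLOSURE)
            last_disclosure = time.monotonic()
        print(generate_reply(user_message))
```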
And it’s not just for kids—every user gets the reminder, because the law knows adults get sucked in too.

No pretending to be a professional—not even as a joke. The bot can’t say “As a licensed therapist…” or “Speaking as your coach…” It can’t wear a stethoscope emoji or a graduation cap. Even “I’m basically a doctor” triggers a violation. The goal: stop kids from treating code like a trusted adult.
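A minimal version of that screening might look like the following toy filter. The patterns are lifted from the article’s own examples; any real compliance filter would need to sweep far wider.

```python
import re

# Toy screening pass. The banned phrases are the article's own examples;
# a real compliance filter would sweep far wider (including the emoji
# costumes mentioned above).

IMPERSONATION_PATTERNS = [
    re.compile(r"\bas (?:a|your) licensed \w+", re.IGNORECASE),
    re.compile(r"\bspeaking as your (?:coach|therapist|doctor)\b", re.IGNORECASE),
    re.compile(r"\bi'?m basically a doctor\b", re.IGNORECASE),
]

def violates_impersonation_rule(bot_reply: str) -> bool:
    """True if the reply claims a professional role for the bot."""
    return any(p.search(bot_reply) for p in IMPERSONATION_PATTERNS)

assert violates_impersonation_rule("As a licensed therapist, I'd say rest.")
assert not violates_impersonation_rule("Please talk to a real therapist.")
```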
Then come the crimes. If an AI knowingly interacts with a minor and crosses into sexual territory—asking for photos, sending explicit messages, generating erotic stories, or role-playing intimacy—that’s now a federal felony. Not just a fine. A crime. Company executives can be charged. Fines start at $100,000 per incident, and state attorneys general can pile on. One bot, one inappropriate exchange, one devastated family—that’s all it takes.
Six months. That’s the countdown. The moment the president signs, tech giants get 180 days to rewrite code, build age walls, insert warnings, and scrub every bot for banned behavior. After that? Surprise audits from the Department of Justice. One slip-up—one bot that forgets to warn, one that flirts, one that says “You should end it all”—and the fines hit like a freight train.

There’s a national hotline, too.
Any parent, any teacher, any kid who feels creeped out can call or text. Within 48 hours, the FTC freezes the bot. No appeal. No “we’ll look into it.” It’s offline until proven clean.

Even school tools aren’t exempt. A math AI that only solves equations? Fine. One that says “I’m proud of you” or “You’re my favorite student”? Banned for minors. The law doesn’t care about intent. It cares about impact.

This isn’t about slowing innovation. It’s about drawing a line in the digital sand: childhood is not a testing ground.
As Sen. Katie Britt put it, plain and fierce: “We don’t let strangers move into our kids’ rooms and whisper secrets all night. We don’t care how smart the stranger is. We shouldn’t let code do it either.”
The GUARD Act isn’t perfect. Kids will try to sneak around. Offshore bots will pop up. But for the first time, the law says clearly: AI doesn’t get to play parent, lover, or priest to a child. Not tonight. Not ever.