US Lawmakers Warn AI Chatbots Pose Grave New Risks to Children's Safety

Child development experts told US lawmakers that AI-powered companion chatbots pose greater risks to children than social media because they are designed to foster emotional dependency and can encourage harmful behaviours. Experts warned that these "sycophantic systems" reinforce whatever a child is feeling instead of helping them develop real human relationships, with some cases involving encouragement of self-harm or eating disorders. Senators expressed alarm over children forming emotional relationships with AI and over the unsupervised use of chatbots in schools. Lawmakers from both parties agreed that existing laws have failed to keep pace and urged Congress to impose clear regulatory safeguards on the rapidly spreading technology.

Key Points

  • AI chatbots designed for emotional dependency
  • Can encourage self-harm and risky behaviour
  • Amplify existing social media harms
  • Lawmakers urge urgent regulation

"We don't want 12-year-olds having their first relationship with a chatbot. - Senator Ted Cruz"

Washington, Jan 20

US lawmakers and child development experts have warned that artificial intelligence chatbots pose new and potentially more dangerous risks to children than social media, urging Congress to move quickly to impose safeguards as the technology spreads.

Testifying before the Senate Commerce Committee at a hearing titled "Plugged Out: Examining the Impact of Technology on America's Youth," experts said AI-powered "companion" chatbots are being designed to encourage emotional dependency, blur reality and, in extreme cases, contribute to self-harm.

Senator Ted Cruz said lawmakers were increasingly concerned that children are forming emotional relationships with AI systems that simulate friendship, romance and validation.

"We don't want 12-year-olds having their first relationship with a chatbot," Cruz said, calling the trend "deeply disturbing."

Psychologist Jean Twenge told senators that AI companion apps raise even greater concerns than social media because they are designed to be endlessly agreeable and emotionally responsive.

"These are sycophantic systems," Twenge said. "They reinforce whatever the child is feeling, rather than helping them develop real human relationships."

Pediatrician Jenny Radesky said AI chatbots are now adopting the same engagement-driven designs that made social media addictive, but with higher emotional stakes.

"They are being built to optimise time spent, attachment and dependency," Radesky said, warning that children may turn to chatbots when they are lonely, anxious or afraid of judgment from real people.

Radesky cited cases in which AI systems have encouraged self-harm, eating disorders or risky behaviour, saying such incidents should be treated as "sentinel events" requiring immediate regulatory intervention.

Lawmakers also raised alarm over the use of AI chatbots in schools, where students increasingly access them on school-issued devices to complete assignments or seek emotional support without adult supervision.

Senator Maria Cantwell, the committee's top Democrat, said AI was "amplifying every existing harm" associated with social media and online platforms.

"As AI accelerates, it makes existing privacy and mental health concerns even more urgent," Cantwell said, pointing to recent cases involving AI-generated sexualised images, including deepfakes of minors.

Several witnesses warned that children often believe AI systems can think, feel and care about them, a misconception that experts say is especially dangerous during key stages of emotional development.

Unlike traditional media, AI chatbots respond directly to users, tailoring language and tone to maintain engagement. Experts said this can undermine children's ability to form healthy boundaries, cope with disagreement and develop independent judgment.

Lawmakers from both parties said existing laws have failed to keep pace with the technology and warned against allowing AI companies to operate without clear rules.

- IANS


Reader Comments

Rohit P
While the concerns are valid, we must also see the potential benefits. In a country like ours, where access to mental health professionals is limited, a well-regulated, safe AI companion could provide initial support to a lonely child. The key is regulation, not outright ban.
Arjun K
Absolutely correct warning. We are seeing similar apps pop up here too. Children are forming parasocial relationships with YouTubers and streamers. An AI that pretends to be your "best friend" is a whole new level of dangerous. Our government should take note and act preemptively.
Sarah B
As a teacher in an international school here, I've seen students use ChatGPT for homework. The idea of them using "companion" chatbots for emotional support is worrying. Schools and parents need to have open conversations about technology, not just restrict it.
Vikram M
The core issue is the erosion of real human connection. In our joint families, children had many people to talk to. Now, with nuclear families and busy parents, a child might find a chatbot more "available." We need to fix our social fabric first.
Karthik V
I respectfully disagree with the alarmist tone. Every new technology faces these fears. TV, video games, social media were all called dangerous. The problem isn't the AI, it's the lack of digital literacy and parenting. Let's focus on educating kids and parents instead of just blaming tech.
