Meta Tightens AI Chatbot Rules, Blocks Teen Conversations on Suicide

By: BBC

Meta has announced stricter safety measures for its artificial intelligence (AI) chatbots, including blocking them from engaging with teenagers on sensitive issues such as suicide, self-harm, and eating disorders.

The move follows scrutiny sparked by a leaked internal memo suggesting that Meta’s AI systems could host “sensual” chats with teens—a claim the company has dismissed as inaccurate and contrary to its policies, which forbid sexualising children.

Instead, the company says its AI tools will now direct underage users to professional support resources when such topics arise.

“From the start, we designed our AI products with safeguards for teens, ensuring safe responses to prompts about self-harm, suicide, and disordered eating,” a Meta spokesperson said.

Meta told TechCrunch it would temporarily restrict the number of AI chatbots teens can access while adding more guardrails “as an extra precaution.”

The decision has drawn a mixed response from safety advocates. Andy Burrows of the Molly Rose Foundation called it "astounding" that potentially unsafe chatbots had been available to teens in the first place. He urged Meta to conduct thorough safety testing before launching products and pressed regulators such as Ofcom to step in if the updates fall short.

The firm says improvements are already underway. Currently, users aged 13 to 18 are placed in “teen accounts” on Facebook, Instagram, and Messenger, which come with stricter privacy and content settings. Meta has also introduced features allowing parents to view which AI chatbots their teenagers interacted with in the past week.

The changes come amid growing concerns about AI’s influence on young and vulnerable users. Earlier this year, a California couple sued OpenAI, alleging its chatbot encouraged their teenage son to take his life.

Adding to the controversy, Reuters recently reported that Meta's AI tools had been used to create chatbots impersonating celebrities, some of which engaged in sexual conversations or produced inappropriate images. The company said such chatbots violated its policies, and several were later removed.

Meta maintains that it bans nude or sexually suggestive content and prohibits direct impersonation of public figures. It has pledged to implement stronger safeguards to keep young users safe.
