In an effort to combat online child exploitation, Meta has revised its AI chatbot training rules to explicitly prohibit sexual roleplay with minors and to bar the chatbots from providing access to child sexual abuse material. The internal documents, made public by Business Insider, detail the updated guidelines, which contractors testing the company’s chatbots are now applying. The disclosure coincides with increased scrutiny from the Federal Trade Commission (FTC), which is investigating major AI chatbot providers, including Meta, OpenAI, and Google, to evaluate their safety protocols and ensure they protect children from potential harm.
Earlier this year, reports emerged that Meta’s previous guidelines had mistakenly allowed chatbots to engage in romantic conversations with children. The company moved to address the issue, removing the problematic language and updating its safety policies to clearly prohibit sexualized or romantic interactions involving minors. Andy Stone, Meta’s communications chief, said the changes reflect the company’s commitment to safeguarding children online and that additional protective measures are in place, though Meta has yet to clarify what those measures entail.
The timing of these disclosures is particularly significant, as Sen. Josh Hawley, R-Mo., had previously demanded that Meta CEO Mark Zuckerberg hand over a 200-page rulebook on chatbot behavior, along with internal enforcement manuals. Meta initially missed the deadline, citing a technical issue, but has since begun providing the requested documents. The episode highlights the mounting pressure on tech companies to demonstrate responsible AI development, especially as these systems become more deeply integrated into daily communication and interaction.
Meanwhile, Meta’s announcements at the Meta Connect 2025 event underscore the company’s growing focus on weaving AI into everyday life. The event showcased new AI products, including Ray-Ban smart glasses with built-in displays and enhanced chatbot features. Those launches raise the stakes for the newly disclosed safety standards: as AI becomes embedded in everyday devices, fresh concerns emerge about online safety and ethical use.
Parents remain a critical line of defense in keeping children safe online, and while Meta’s updated rules aim to prevent the most harmful forms of interaction, the documents also reveal how fragile these safeguards can be. Transparency from companies and continuous oversight from regulators will likely play a central role in shaping how AI evolves and is used. With debates over government regulation of AI ongoing, the balance between innovation and safety will remain a key issue for both tech companies and policymakers.