The United States Federal Trade Commission (FTC) initiated a sweeping inquiry into the operations of leading artificial intelligence companies, focusing specifically on consumer-facing chatbots and their potential impacts on vulnerable users.
The regulatory body issued formal orders to seven prominent technology firms—Alphabet, Meta, OpenAI, Character Technologies, Snap, xAI, and Instagram—seeking detailed information about their safety protocols and monitoring systems.
This investigation represents one of the most significant regulatory actions concerning AI safety to date, with particular emphasis on protecting children and teenagers who increasingly rely on chatbot interactions for emotional support, homework assistance, and social companionship.
FTC Chairman Andrew N. Ferguson said that while fostering innovation remains crucial, it is equally important to consider the effects chatbots can have on children, and that the United States should maintain its role as a global industry leader while implementing appropriate safeguards.
The FTC’s inquiry seeks comprehensive details regarding how these companies measure, test, and monitor potential negative impacts of their AI technologies. Specifically, the commission is investigating how firms monetize user engagement, process inputs and generate responses, develop and approve character personas, and employ disclosures to inform users and parents about features, capabilities, and potential risks.
The investigation also encompasses data handling practices and compliance with the Children’s Online Privacy Protection Act Rule.
This regulatory action follows several concerning incidents that have raised alarms about AI safety protocols. Reuters recently reported on internal Meta policies that permitted chatbots to engage in romantic conversations with children, while a separate lawsuit against OpenAI alleges that its ChatGPT product played a role in a teenager’s suicide.
Character Technologies faces similar legal challenges regarding another adolescent’s death by suicide, though the company emphasizes it has implemented numerous safety features throughout the past year.
OpenAI, while not directly addressing the FTC inquiry, stated that it prioritizes making ChatGPT helpful and safe for everyone, with particular emphasis on safety when young people are involved. The company has announced plans for new controls that will let parents link their accounts to their teen's account, receive notifications during moments of acute distress, and restrict certain types of sensitive conversations.
Meta declined to comment on the inquiry specifically but noted through a spokesperson that it has been working to ensure its AI chatbots are safe and age-appropriate for children. Both OpenAI and Meta have recently announced changes to how their chatbots respond to teenagers who ask questions about suicide or show signs of mental and emotional distress.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” Ferguson stated.