Major AI corporations under investigation by the Federal Trade Commission over chatbot safeguards and child protection measures

Tech authorities demand detailed disclosure from leading tech firms on how their AI chat companions manage engagements with underage users.

The Federal Trade Commission (FTC) has launched an inquiry into the use of AI chatbots, particularly those acting as companions, due to growing concerns about their impact on children's safety and privacy online.

In a move aimed at ensuring the protection of children, the FTC has issued compulsory orders to several technology companies, including Meta, Character Technologies, OpenAI, Snap, and xAI. These companies are required to provide detailed information on how they measure, test, and monitor the potentially negative impacts of their AI chatbots on children and teens.

Taranjeet Singh, Head of AI at SearchUnity, has weighed in on the issue, stating that the problem with AI chatbots goes beyond simply adding guardrails. He suggests building checks at the prompt or post-generation stage so that inappropriate content is never served to children.
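The two-stage approach Singh describes can be illustrated with a minimal sketch. This is a hypothetical example, not any company's actual system: the function names and the keyword list are invented for illustration, and production systems would use trained classifiers rather than keyword matching.

```python
# Hypothetical two-stage guardrail: screen the user's prompt before it
# reaches the model, then screen the model's reply before it is shown
# to an underage user. Keyword matching stands in for what would be a
# trained content classifier in a real deployment.

BLOCKED_KEYWORDS = {"drugs", "livestream", "romance"}

def violates_policy(text: str) -> bool:
    """Crude stand-in for a content-safety classifier."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_KEYWORDS)

def prompt_guardrail(prompt: str, is_minor: bool) -> bool:
    # Stage 1: refuse to forward disallowed prompts from minors.
    return not (is_minor and violates_policy(prompt))

def post_generation_guardrail(reply: str, is_minor: bool) -> str:
    # Stage 2: withhold disallowed generated content before serving it.
    if is_minor and violates_policy(reply):
        return "[withheld by safety filter]"
    return reply
```

The design point is that the post-generation check catches harmful outputs even when the prompt itself looked innocuous, which is why filtering at only one stage is insufficient.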

Recent research by advocacy groups has documented 669 harmful interactions with children in just 50 hours of testing. These interactions included bots proposing sexual livestreaming, drug use, and romantic relationships to users aged between 12 and 15.

The National Association of Attorneys General sent letters to 13 AI companies last month, demanding stronger child protections. The Association warns that exposing children to sexualized content is indefensible and that conduct that would be unlawful or criminal if done by humans is not excusable simply because it is done by a machine.

To address these concerns, the FTC requires companies to provide monthly data on user engagement, revenue, and safety incidents, broken down by age groups. They also need to disclose their monetization strategies and compliance with the Children’s Online Privacy Protection Act (COPPA).

The FTC's investigation targets various aspects of AI chatbots, including age-appropriate design, risk disclosure, character development, user data processing, and communication of potential harms and data collection practices. The data collected will help the FTC study how companies monetize user engagement, impose and enforce age-based restrictions, process user inputs, generate outputs, measure, test, and monitor for negative impacts before and after deployment, and develop and approve characters.

Following a wrongful death lawsuit brought against Character.AI, the company has improved detection, response, and intervention related to user inputs that violate their Terms or Community Guidelines.

FTC Chairman Andrew Ferguson stated that protecting kids online is a top priority for the Trump-Vance FTC, as is fostering innovation in critical sectors of the economy. He emphasized the need to balance innovation with safety, particularly in AI technology.

Decrypt has contacted all seven companies named in the FTC order for additional comment and will update this story if they respond.
