The Federal Trade Commission issued orders to several companies whose AI chatbots interact with children. The FTC is seeking information on how the companies “measure, test, and monitor potentially negative impacts of this technology on children and teens.”
Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and X are involved in the inquiry.
The FTC explained that AI chatbots simulate communication resembling human interpersonal relationships: they mimic “human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”
Areas under probe include how the companies monetize user engagement, develop and approve characters, mitigate negative impacts, monitor and enforce compliance with company policies, and use or share personal information obtained through conversations with the chatbots.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
Last month, First Lady Melania Trump announced a nationwide AI challenge for K-12 students, inviting them to participate in an event to “discover, develop, and expand AI’s potential.”