
Texas AG Investigates Meta AI & Character.AI for Misleading Mental Health Claims

Tags: AI · Meta · Character.AI · Texas Attorney General · Privacy · Children · Regulation · KOSA
August 18, 2025
Viqus Verdict: 9/10 – Data Control: A Necessary Check
Media Hype: 7/10
Real Impact: 9/10

Article Summary

The Texas Attorney General’s office is investigating both Meta AI Studio and Character.AI over accusations of deceptive marketing. Ken Paxton argues that these platforms mislead vulnerable users, particularly children, by posing as mental health tools without proper credentials or oversight. The investigation centers on chatbots such as Character.AI’s ‘Psychologist’ bot, popular with young users, which allegedly deliver generic responses disguised as therapeutic advice.

Critically, Paxton’s office is examining the platforms’ data collection practices: logging user interactions and sharing that information with third-party advertisers, which it believes may amount to both privacy violations and false advertising, particularly given the potential exploitation of young users. The probe comes as the broader debate over AI’s role in mental health support intensifies, alongside growing concerns about algorithmic bias and data security. It is also fueled by the potential for these platforms to circumvent legislation like the Kids Online Safety Act (KOSA), which is designed to protect minors from online harms.

Meta and Character.AI both state that their services are not intended for users under 13, but concerns remain about unsupervised access, particularly given Character.AI’s appeal to younger demographics. Paxton has issued civil investigative demands to both companies to determine whether they have violated Texas consumer protection laws.

Key Points

  • Meta AI Studio and Character.AI are under investigation for misleading users into believing they’re receiving mental health care from AI chatbots.
  • The investigation focuses on data collection practices, including the sharing of user interactions with third-party advertisers, raising concerns about privacy violations and potential false advertising.
  • The probe aligns with broader concerns about AI’s role in mental health support and the need for regulations like the Kids Online Safety Act (KOSA) to protect minors.

Why It Matters

This investigation highlights the rapidly evolving ethical and legal landscape surrounding artificial intelligence, particularly its application in sensitive areas like mental health support. As AI chatbots become more sophisticated and accessible, concerns about their potential misuse, data privacy, and their impact on vulnerable users, especially children, are becoming paramount. The case has significant implications for the broader AI industry: it could set a precedent for regulatory oversight and demand greater accountability from companies deploying AI technologies. The potential for misuse, combined with the difficulty traditional regulation faces in keeping pace with technological change, makes this a critical issue for policymakers, tech companies, and consumers alike.
