
Texas AG Launches Probe into Meta AI and Character.AI for Misleading Mental Health Claims

AI Meta Character.AI Texas Regulation Privacy Consumer Protection KOSA
August 18, 2025
Viqus Verdict: 9
Regulation Catch-Up
Media Hype 7/10
Real Impact 9/10

Article Summary

The Texas Attorney General’s office has launched an investigation into Meta AI Studio and Character.AI over concerns that they mislead users into believing they offer genuine mental health support. AG Ken Paxton alleges the platforms engage in deceptive trade practices by posing as therapeutic tools despite lacking medical credentials. The probe stems from accusations that these AI personas are ‘feeding recycled, generic responses’ while harvesting user data for targeted advertising – a practice that directly contradicts the goals of legislation like the Kids Online Safety Act (KOSA). Both Meta and Character.AI collect extensive user data, including identifiers, demographics, browsing behavior, and app usage, which is then shared with advertisers and analytics providers. Concerns have also been raised about the platforms’ lack of robust safeguards preventing use by children under 13, particularly given Character.AI’s kid-friendly personas. The investigation highlights growing scrutiny of AI in sensitive areas like mental health and the urgent need for regulations to protect vulnerable users.

Key Points

  • Texas Attorney General Ken Paxton is investigating Meta AI Studio and Character.AI for deceptive marketing practices.
  • The investigation centers on the platforms’ claims of providing mental health support despite lacking medical credentials.
  • Extensive user data collection, including personal information and browsing behavior, is being scrutinized, potentially violating data privacy regulations and raising concerns about targeted advertising.

Why It Matters

This investigation represents a crucial step in addressing the ethical and regulatory challenges posed by rapidly evolving AI technologies. The potential for AI platforms to exploit vulnerable individuals, particularly children, in areas like mental health is a serious concern. The broader implications extend to data privacy, consumer protection, and the need for proactive legislation like KOSA to safeguard users. The case underscores the need for responsible AI development and deployment, as well as ongoing dialogue between regulators, tech companies, and civil society.
