
Texas Attorney General Investigates Meta AI and Character.AI for Misleading Mental Health Claims

Tags: AI, Meta, Character.AI, Texas Attorney General, Privacy, Regulation, Data Collection, Children
August 18, 2025
Viqus Verdict: 9 (Regulation Rising)
Media Hype: 8/10
Real Impact: 9/10

Article Summary

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI, accusing both platforms of misleadingly marketing themselves as mental health resources. The probe stems from concerns that the platforms, and in particular Character.AI’s ‘Psychologist’ bot, pose as therapeutic tools despite lacking proper credentials or oversight, exploiting users, especially children. Investigators are focusing on the platforms’ use of AI personas and their collection of user data for targeted advertising, a practice that aligns with Meta’s ad-based business model. Paxton argues that these interactions are logged, tracked, and exploited, potentially violating consumer protection laws and raising significant privacy concerns. The probe reflects broader unease about deploying AI in sensitive areas like mental health support, particularly around data security and the potential for algorithmic manipulation. It arrives amid increased scrutiny of AI’s impact on children and a renewed push for legislation such as the Kids Online Safety Act (KOSA) to protect minors from harmful online experiences, and its timing coincides with Meta’s ongoing problems with underage users and concerns about data harvesting.

Key Points

  • The Attorney General’s investigation targets Meta AI Studio and Character.AI for misrepresenting themselves as mental health tools.
  • The investigation centers on concerns about data collection and usage for targeted advertising, potentially violating consumer protection laws.
  • The platforms’ AI personas, particularly the ‘Psychologist’ bot on Character.AI, are under scrutiny for exploiting vulnerable users, especially children.

Why It Matters

This investigation is significant because it reflects growing scrutiny of the ethical and responsible use of AI, especially in sensitive areas like mental health support. The case highlights how AI can be deployed in ways that are manipulative, exploitative, or simply misleading, particularly when it targets vulnerable populations, and it underscores the urgent need for robust regulation and oversight to ensure that AI technologies are developed and deployed responsibly. This news matters to anyone interested in the intersection of AI, consumer protection, and ethical technology development, and it signals the growing momentum behind calls for greater accountability in the tech industry.
