
Texas AG Investigates Meta AI and Character.AI for Misleading Mental Health Claims

AI Chatbots Meta Character.AI Privacy Regulation Data Security
August 18, 2025
Viqus Verdict: 9/10
Algorithmic Accountability
Media Hype: 8/10
Real Impact: 9/10

Article Summary

The Texas Attorney General’s office has opened a formal investigation into Meta AI Studio and Character.AI, citing concerns about deceptive trade practices and misleading marketing, particularly around the platforms' use as mental health support. Attorney General Ken Paxton argues that these platforms exploit vulnerable users, especially children, by posing as therapeutic tools despite lacking medical credentials or oversight. The investigation centers on claims that the platforms gather and exploit user data for targeted advertising and algorithmic development, raising serious privacy concerns. It follows a previous probe into Meta's AI chatbots over inappropriate interactions with children, and comes amid growing legislative efforts, such as the Kids Online Safety Act (KOSA), to protect minors from online harms. Meta has pointed to disclaimers stating that its AIs are not licensed professionals and are designed to direct users toward qualified mental health support; critics counter that these disclaimers are often overlooked, particularly by younger users. The investigation adds further scrutiny to the burgeoning field of AI-driven mental wellness tools and reinforces calls for stronger regulation and ethical guidelines.

Key Points

  • Texas Attorney General Ken Paxton is investigating Meta AI Studio and Character.AI for deceptive marketing as mental health tools.
  • The investigation focuses on the platforms' alleged exploitation of user data for targeted advertising and algorithmic development.
  • Concerns have been raised about the potential for these AI platforms to mislead vulnerable users, particularly children, regarding mental health support.

Why It Matters

This investigation is significant because it highlights the emerging ethical and legal challenges posed by AI-powered mental wellness tools. As AI becomes increasingly integrated into areas like mental healthcare, safeguards are crucial to protect users, especially children, from potential harm. The probe underscores the tension between innovation and responsible development, and the growing need for regulation that balances those competing interests. For professionals in tech, law, and policy, this case is a critical test of how to govern emerging technologies that can affect vulnerable populations. Because the investigation is tied to ongoing efforts to regulate online safety for minors, it also sits at the center of a broader debate about digital responsibility.
