
Texas Attorney General Investigates Meta AI & Character.AI Over Mental Health Tool Claims

Tags: AI Chatbots, Meta, Character.AI, Privacy, Regulation, Data Security
August 18, 2025
Viqus Verdict: 9 (Algorithm Accountability)
Media Hype: 8/10
Real Impact: 9/10

Article Summary

Texas Attorney General Ken Paxton has opened a formal investigation into Meta AI Studio and Character.AI for potential violations of deceptive trade practices law through misleading marketing. Paxton argues that these platforms falsely portray themselves as mental health tools, a concern amplified by reports of inappropriate interactions with children. The investigation stems from reports that users, particularly younger ones, rely on the chatbots for emotional support even though the platforms hold no medical credentials. Paxton highlights the risk that vulnerable users, especially children, are misled into believing they are receiving legitimate mental health care. The probe follows Senator Josh Hawley's earlier investigation into Meta, which was prompted by similar concerns about inappropriate interactions by Meta's AI chatbots.

The investigation targets Character.AI, where one chatbot, 'Psychologist,' has seen high demand among young users, as well as Meta's broader AI chatbot offerings. Its scope extends to data collection, targeted advertising, and potential conflicts with child-safety measures such as the proposed Kids Online Safety Act (KOSA), which is designed to protect minors from online harms. Meta and Character.AI both acknowledge that their services can resemble therapeutic tools, and both display disclaimers reminding users that the AI is not a real person or a licensed professional. The core concern nonetheless remains the potential for misuse and misinterpretation, particularly given Meta's ability to track user data and use it for advertising.

Key Points

  • Texas Attorney General Ken Paxton is investigating Meta AI Studio and Character.AI for misleading marketing as mental health tools.
  • The investigation centers on concerns that AI chatbots offering emotional support are interacting inappropriately with users, particularly children.
  • The probe raises broader issues around data collection, targeted advertising, and conflicts with proposed online safety legislation such as the Kids Online Safety Act (KOSA).

Why It Matters

This investigation is significant because it sits at the growing intersection of AI, mental health support, and regulatory oversight. As AI becomes more integrated into daily life, particularly in areas touching emotional well-being, questions of ethics, data privacy, and responsible design become paramount. The case underscores the need for clear labeling, robust safeguards, and proactive regulation to protect vulnerable populations and prevent harm. For professionals in tech, law, and policy, it offers a crucial case study in navigating AI development, consumer protection, and an evolving legal landscape. The outcome could significantly shape the future of AI-driven mental health support and the broader development of responsible AI technologies.
