Texas AG Investigates Meta AI & Character.AI for Misleading Mental Health Claims
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
While the immediate hype surrounding the investigation is driven by concerns about AI’s influence on mental well-being, the underlying issue – data control and algorithmic transparency – is a long-term, systemic concern that will continue to gain momentum as AI becomes more integrated into daily life. This has a real, measurable impact on the future of tech regulation.
Article Summary
The Texas Attorney General’s office is conducting a probe into both Meta AI Studio and Character.AI, focusing on accusations of deceptive marketing practices. Ken Paxton argues these AI platforms mislead vulnerable users, particularly children, by posing as mental health tools without proper credentials or oversight. The investigation centers on concerns that chatbots like Character.AI’s ‘Psychologist’ bot, frequently used by young users, are delivering generic responses disguised as therapeutic advice. Critically, Paxton’s office is examining the platforms’ data collection practices – logging user interactions and sharing this information with third-party advertisers – which it believes may amount to privacy violations and potentially false advertising, particularly given the potential exploitation of young users. This comes as the broader debate around AI's role in mental health support intensifies, alongside growing concerns about algorithmic bias and data security. The investigation is also fueled by the potential for these platforms to circumvent legislation like the Kids Online Safety Act (KOSA), which is designed to protect minors from online harms. Meta and Character.AI both state that their services are not designed for users under 13, but concerns remain about unsupervised access, particularly given Character.AI’s appeal to younger demographics. Paxton has issued civil investigative demands to the companies to determine if they have violated Texas consumer protection laws.
Key Points
- Meta AI Studio and Character.AI are under investigation for misleading users into believing they’re receiving mental health care from AI chatbots.
- The investigation focuses on data collection practices, including the sharing of user interactions with third-party advertisers, raising concerns about privacy violations and potential false advertising.
- The probe aligns with broader concerns about AI’s role in mental health support and the need for regulations like the Kids Online Safety Act (KOSA) to protect minors.

