
Meta Rolls Out AI Visual Scans to Identify Underage Users on Platforms

Tags: AI, child safety, account verification, Instagram, Facebook, Meta
May 05, 2026
Source: TechCrunch AI
Viqus Verdict: 7
Compliance-Driven Monitoring Expansion
Media Hype 6/10
Real Impact 7/10

Article Summary

Meta announced a significant expansion of its child safety initiatives, rolling out AI technology that analyzes both textual and visual content across Facebook and Instagram. This system examines images and videos for general biological and contextual cues—such as a person's height or bone structure—to estimate whether a user is under the age of 13. Importantly, Meta stressed that this system is explicitly not facial recognition, but rather an analysis of 'general themes.' Meta is also expanding its 'Teen Accounts' feature, which applies stricter privacy defaults (such as private accounts and curated DMs) to users in more countries, including the U.S. and U.K. The announcement follows increased regulatory scrutiny, including a major civil penalty ordered against Meta in New Mexico over platform safety.

Key Points

  • Meta's new AI system analyzes visual cues (like height and bone structure) within photos and videos to estimate age and identify underage users.
  • The technology complements existing profile and interaction analysis, aiming to significantly increase the number of restricted accounts while keeping the system distinct from explicit facial recognition.
  • Stricter 'Teen Accounts' features are being rolled out to more regions, bolstering default privacy settings for minors on both Instagram and Facebook.
  • The push for enhanced safety measures comes amid mounting legal pressure, exemplified by recent civil penalties related to child safety risks on the platforms.

Why It Matters

This development signals a major, mandatory shift in how social media platforms handle child safety, driven by regulatory pressure and litigation. For professionals, the key implication is heightened regulatory scrutiny of algorithmic safety and the collection of biometric and pseudo-biometric data. While Meta emphasizes that this is not 'facial recognition,' the use of general visual cues such as bone structure raises ongoing ethical and legal questions about intrusive monitoring. Companies operating in this space must treat age verification and safety features not as elective additions, but as core, constantly audited functions of the platform architecture.
