Meta Rolls Out AI Visual Scans to Identify Underage Users on Platforms
Viqus Verdict Score: 7
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
The news scores highly on impact because it describes a genuine, structural change in platform moderation and regulatory compliance, even if the public hype level remains moderate.
Article Summary
Meta announced a significant expansion of its child safety initiatives, rolling out AI technology that analyzes both textual and visual content across Facebook and Instagram. The system examines images and videos for general biological and contextual cues, such as a person's height or bone structure, to estimate whether a user is under the age of 13. Importantly, Meta stressed that the system is explicitly not facial recognition, but rather an analysis of 'general themes.' In addition, Meta is expanding its 'Teen Accounts' feature, which applies stricter privacy defaults (such as private accounts and restricted DMs) to users in more countries, including the U.S. and U.K. The announcement follows increased regulatory scrutiny, including a major civil penalty ordered against Meta in New Mexico regarding platform safety.
Key Points
- Meta's new AI system analyzes visual cues (like height and bone structure) within photos and videos to estimate age and identify underage users.
- The technology complements existing profile and interaction analysis, aiming to significantly increase the number of restricted accounts while stopping short of explicit facial recognition.
- Stricter 'Teen Accounts' features are rolling out to more regions, bolstering default privacy settings for minors on both Instagram and Facebook.
- The push for enhanced safety measures comes amid mounting legal pressure, exemplified by recent civil penalties related to child safety risks on the platforms.

