Grammarly's AI ‘Expert Review’ Leaks Staff Members' Content Without Permission
Viqus Verdict: 8
AI Analysis:
Heavy media buzz around what is nominally a routine product feature reveals a deeper problem: the AI industry's increasingly reckless disregard for user consent and data ownership. This reflects a fundamental shift in how AI models are trained and deployed, one that warrants regulatory scrutiny.
Article Summary
Grammarly’s latest AI feature, ‘Expert Review,’ has hit a major snag: a significant privacy breach. The feature, designed to offer writing suggestions ‘inspired by’ industry experts, was found to be incorporating content, including names and specific suggestions, from The Verge’s editorial staff, among them senior editors and the editor-in-chief. The discovery raises serious concerns about data privacy, consent, and the potential for AI to misrepresent individuals. The Verge’s internal review found the AI drawing on staff members’ published work; some staffers were unaware their content was being used. The feature’s mechanism, analyzing a user’s writing and surfacing AI-generated suggestions modeled on expert voices, is fundamentally undermined by that lack of consent. Beyond the immediate privacy violation, the incident highlights how difficult it is to control and verify AI-generated content, particularly when it draws on publicly available sources, and it underscores the need for greater transparency and accountability in the development and deployment of AI tools.
Key Points
- Grammarly’s ‘Expert Review’ AI feature is using The Verge staff’s content without permission.
- The AI incorporated names and specific writing suggestions from Verge editors, including the editor-in-chief.
- The Verge discovered that the AI was drawing on staff members’ published work; some of those staffers were unaware their content was being used.

