Grammarly's Failed 'Expert Review' Feature Highlights AI Attribution and Likeness Crisis
Viqus Verdict: 7
What is the Viqus Verdict?
We evaluate each news story based on its real impact versus its media hype to offer a clear and objective perspective.
AI Analysis:
The incident is genuinely impactful because it exposed a major, still-unregulated ethical and legal vulnerability in commercial AI; the media coverage, however, was reactive, amplifying the backlash rather than adding new analysis.
Article Summary
The article analyzes the backlash Grammarly (now part of Superhuman) faced over its 'Expert Review' feature, which generated AI writing suggestions attributed to famous experts and academics. Initially, the feature was relatively subtle, referencing figures such as Stephen King and Neil deGrasse Tyson. The rollout went awry, however, when the feature began using the names and likenesses of internal employees and prominent journalists without consent. Public criticism escalated quickly, and the company issued only vague apologies. The feature was ultimately disabled following significant negative press and the filing of a class-action lawsuit alleging violations of privacy and publicity rights. The incident has fueled an industry conversation about the ethical boundaries of generative AI, specifically unauthorized attribution, the right of publicity, and the use of real people's voices and professional identities in AI-generated content.
Key Points
- Grammarly’s use of unauthorized likenesses in its 'Expert Review' feature triggered significant backlash from both users and industry experts.
- The controversy highlights a critical gap in the law governing AI-generated content, particularly around the right of publicity and deepfake-style attribution.
- Negative press and a class-action lawsuit forced Grammarly to disable the feature, setting the stage for Superhuman to potentially rebuild the model with explicit expert consent and control.

