
Grammarly's Failed 'Expert Review' Feature Highlights AI Attribution and Likeness Crisis

Tags: Grammarly, Expert Review, AI agents, Superhuman, AI ethics, Privacy rights
April 05, 2026
Source: The Verge AI
Viqus Verdict: 7/10
Ethical Wake-Up Call, Not Tech Milestone
Media Hype 6/10
Real Impact 7/10

Article Summary

The article analyzes the backlash against Grammarly (now part of Superhuman) over its 'Expert Review' feature, which generated AI writing suggestions attributed to famous experts and academics. At launch, the feature referenced only public figures such as Stephen King and Neil deGrasse Tyson. The rollout went awry when it began using the names and likenesses of internal employees and prominent journalists without consent. Public criticism escalated quickly, and the company's initial apologies were vague. The feature was ultimately disabled after sustained negative press and the filing of a class-action lawsuit alleging violations of privacy and publicity rights. The incident has fueled an industry-wide conversation about the ethical boundaries of generative AI, specifically unauthorized attribution, the right of publicity, and the use of real people's voices and professional identities in AI-generated content.

Key Points

  • Grammarly’s use of unauthorized likenesses in its 'Expert Review' feature triggered significant backlash from both users and industry experts.
  • The controversy highlights a critical, unaddressed legal gap in current AI capabilities regarding the right of publicity and deepfake-style attribution.
  • The failure forced Grammarly to disable the feature, sparking a class-action lawsuit and setting the stage for Superhuman to potentially rebuild the model with explicit expert consent and control.

Why It Matters

This is not merely a PR mishap; it represents a direct confrontation between rapid AI development and established legal and ethical boundaries. Professionals should pay attention because the debate over 'attribution' versus 'representation' (i.e., fabricating advice under a real person's name) will define consumer-facing AI tools for years. Any major AI company attempting to commercialize expert knowledge now faces heightened scrutiny and legal risk, potentially forcing a shift toward regulated, permission-based models for AI agents.
