A proposed class action lawsuit filed in federal court is putting Grammarly and its parent company, Superhuman, under fresh scrutiny over an AI feature that allegedly used the names and identities of journalists, authors, and other public figures without consent. The case centers on Grammarly’s “Expert Review” tool, which generated writing feedback framed through the perspective of named experts. The lawsuit arrives just as the company says it has disabled the feature and plans to rethink how it represents experts in future products.
Lawsuit Targets Grammarly’s “Expert Review” Feature
The complaint was filed on March 11, 2026, in the U.S. District Court for the Southern District of New York. According to WIRED, investigative journalist Julia Angwin is the only named plaintiff so far, but the suit seeks to represent a broader class of people whose names and identities were allegedly used in Grammarly’s product without permission. The complaint argues that damages across the proposed class exceed $5 million, though it does not specify an exact amount sought from the court.
At the center of the dispute is Grammarly’s “Expert Review” tool, an AI-powered feature that surfaced editing suggestions as if they reflected the perspective of established writers, journalists, academics, and other subject-matter experts. Grammarly’s own support materials described the tool as one that “identifies relevant subject-matter experts” based on a user’s text and then suggests edits from those perspectives. The company also stated in product documentation that references to experts were “for informational purposes only” and did not indicate affiliation or endorsement.
Even with that disclaimer, the lawsuit contends that the commercial use of real people’s names inside a paid or monetized AI product crossed a legal line. WIRED reported that the complaint accuses Grammarly of the “misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors” in order to generate profit for Grammarly and Superhuman.
Grammarly Allegedly ‘Misappropriated’ Names of Journalists, Says Class Action Suit
The legal theory behind the case is significant because it does not focus only on copyright or AI training data. Instead, it centers on name, identity, and likeness rights. According to WIRED’s reporting, Angwin’s attorney, Peter Romer-Friedman, argues that laws in New York and California prohibit the commercial use of a person’s name and likeness without consent. The complaint also alleges that Grammarly attributed words and advice to people who never gave them.
That distinction matters in the broader AI debate. Many lawsuits against AI companies have focused on whether models were trained on copyrighted material. This case instead raises a different question: can a company market AI-generated guidance by attaching it to the identity of a real journalist or author, even with a disclaimer saying the person did not endorse the feature? The answer could shape how AI companies design products that simulate expertise or style.
The public backlash appears to have been swift. Reporting cited by The Overspill and summarized by WIRED indicated that the feature referenced a wide range of recognizable media and publishing figures, including journalists and editors from major outlets. That breadth may strengthen the plaintiffs’ argument that the issue was not isolated to one or two names but part of a broader product design choice.
Company Response and Product Changes
Grammarly’s parent company has already moved to shut the feature down. In a statement to WIRED, Ailian Gan, Superhuman’s director of product management, said the company had decided to disable Expert Review in response to the feedback it received and would “reimagine the feature” to make it more useful while giving experts “real control” over how they are represented, or whether they are represented at all. Gan also said the company had “clearly missed the mark” and apologized.
Superhuman CEO Shishir Mehrotra also acknowledged criticism publicly. According to WIRED, Mehrotra wrote on LinkedIn that the company had received valid critical feedback from experts concerned that the tool misrepresented their voices. He said the scrutiny would help improve the product and that the company took the criticism seriously. WIRED reported that Superhuman did not immediately comment on the lawsuit itself beyond those broader statements about the feature.
Grammarly’s support documentation, captured before or around the shutdown, showed that Expert Review had been available in Superhuman Go and in Grammarly’s “docs” writing surface. The feature was available to Pro and Plus subscribers, while free users had a more limited version with up to five suggestions per day in docs. That commercial context may become important as the case proceeds, because the lawsuit challenges the use of names in a revenue-generating product environment rather than in a purely experimental setting.
Why the Case Matters for AI Companies
The lawsuit lands at a time when AI companies are under pressure to show that their products are both useful and trustworthy. Tools that mimic expert judgment can be attractive to users because they promise personalized, high-value feedback. But they also create legal and ethical risks when companies invoke real people’s reputations to make those tools more credible.
For journalists, authors, and academics, the case highlights a growing concern that professional identity itself is becoming raw material for AI products. A person may spend decades building a reputation for a particular style, standard, or area of expertise. If a software company can package that identity into a feature without permission, critics argue, it may dilute the value of that reputation and create confusion about what the person actually believes or would recommend. Those concerns are especially acute in journalism, where credibility and attribution are central to public trust.
For users, the dispute raises practical questions about transparency. A disclaimer may tell users that a named expert did not directly participate, but the product design may still imply a level of authenticity or endorsement that users find persuasive. Regulators and courts may increasingly examine whether such disclosures are enough when the surrounding interface suggests a stronger connection than actually exists. That issue extends beyond Grammarly to a wider range of AI assistants, writing tools, and recommendation engines.
Legal and Industry Implications
The case could become an important test of how traditional publicity and misappropriation laws apply to generative AI products. If the plaintiffs succeed, AI developers may need to secure explicit licenses or opt-in agreements before using real names as part of product features, even when the underlying output is machine-generated and accompanied by disclaimers. If Grammarly prevails, companies may argue that contextual disclosures and transformative AI use provide enough legal protection. At this stage, the court has not ruled on the merits.
The dispute also reflects a broader shift in AI governance. Companies are moving from general-purpose chatbots toward specialized agents that promise advice from domain experts. As that trend accelerates, the line between inspiration, simulation, endorsement, and impersonation becomes harder to manage. According to Grammarly’s own product materials, Expert Review was designed to let users “sharpen” their writing through “industry-relevant perspectives.” That language may be commercially appealing, but it also underscores why identity-based claims can trigger legal exposure.
From a business standpoint, the episode is a reminder that speed in AI product launches can collide with long-established rights around identity and commercial use. Even when a company responds quickly by disabling a feature, litigation can continue if plaintiffs argue that harm has already occurred. That is one reason this case is likely to be watched closely across the software, publishing, and media industries.
Conclusion
The proposed class action against Grammarly and Superhuman turns a spotlight on one of the most sensitive questions in AI product design: whether a company can borrow the authority of real people without their permission. The lawsuit, filed on March 11, 2026, alleges that Grammarly’s Expert Review feature commercially used the names and identities of journalists and other experts in a way that violated their rights. Grammarly has already disabled the feature and apologized, but the legal fight may still help define how AI companies represent expertise in the future.
Frequently Asked Questions
What is the Grammarly lawsuit about?
The lawsuit alleges that Grammarly’s “Expert Review” feature used the names and identities of journalists, authors, and other experts without their consent in a commercial AI product. The complaint says this amounted to misappropriation and unlawful use of identity.
Who filed the class action suit?
According to WIRED, investigative journalist Julia Angwin is the only named plaintiff at this stage. The case seeks to represent a broader proposed class of similarly affected individuals.
What was Grammarly’s Expert Review feature?
Expert Review was an AI tool that analyzed a user’s writing and generated suggestions framed through the perspective of named experts relevant to the topic or style of the text. Grammarly’s support pages said the feature was available in Superhuman Go and Grammarly docs.
Did Grammarly shut the feature down?
Yes. Superhuman told WIRED it had decided to disable Expert Review and would rethink the feature to give experts more control over how they are represented.
How much money is at stake in the case?
The complaint does not seek a specific dollar figure, but WIRED reported that it alleges damages across the proposed class exceed $5 million.
Why is this case important beyond Grammarly?
The case could influence how AI companies use real names, reputations, and identities in product design. A court ruling may help define whether disclaimers are enough or whether companies need consent before presenting AI output through the lens of real experts.