
⚖️ Google's Gemma offline after complaints


Google Pulls Gemma After Defamation Allegations

Google has removed its open-weight AI model Gemma from the company’s AI Studio platform after U.S. Senator Marsha Blackburn accused the system of fabricating claims of sexual misconduct about her. In a letter to CEO Sundar Pichai, Blackburn said Gemma responded to a prompt with entirely invented allegations, citing nonexistent sources and broken links. She called the result “an act of defamation produced and distributed by a Google-owned AI model.” Her letter follows a similar lawsuit filed by conservative activist Robby Starbuck, who claims Google’s AI defamed him by generating false statements that he was a sexual predator. Google’s policy lead acknowledged that hallucinations are a known risk in large language models and said the company is working to reduce them. Google added that Gemma was never intended to be used as a consumer-facing chatbot, and it has removed the model from AI Studio while keeping it accessible to developers via API.

The Line Between Hallucination and Defamation

The case spotlights an emerging legal question: when an AI model “hallucinates,” who is responsible for the harm? Tech companies have historically shielded themselves by framing such errors as technical artifacts, but as AI-generated falsehoods increasingly resemble defamatory speech, that defense is wearing thin. Blackburn’s letter pushes the issue into the political spotlight, arguing that hallucinations aren’t mere glitches but legally actionable fabrications. If courts begin to view AI outputs as publisher speech rather than neutral computation, it could expose model providers to the same defamation liabilities faced by traditional media. That would dramatically reshape how companies deploy and monitor AI products, particularly open, lightweight models that are harder to control once released.

Guardrails Are No Longer Optional

For founders developing or integrating AI models, this controversy is a wake-up call. Even open models can carry reputational and legal risk if their outputs harm individuals or businesses. Startups should ensure that use cases are clearly defined, deploy content filters and disclaimers for user-facing tools (a minimal sketch of what that can look like follows below), and document their model training data and safety processes. More importantly, founders should avoid marketing or deploying general-purpose models for factual queries without verification mechanisms, a growing red flag for regulators and investors alike. The lesson from Gemma: once your AI speaks in the real world, it can also defame in the real world.
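To make the "content filters and disclaimers" point concrete, here is a minimal sketch in Python of a guardrail wrapper for a user-facing tool. Everything in it is an illustrative assumption rather than any vendor's API: call_model() is a stand-in for whatever model endpoint you actually use, and the regex is a toy placeholder for a real safety classifier.

```python
import re

# Minimal guardrail sketch. All names here (call_model, guarded_answer,
# ACCUSATION_PATTERN) are illustrative assumptions, not any vendor's API.

DISCLAIMER = (
    "\n\n[AI-generated content. It may contain errors; verify factual "
    "claims about people or companies against primary sources.]"
)

# Toy filter: flag output that pairs a capitalized full name with
# accusation language nearby. A production system would use a trained
# safety classifier plus human review, not a regex.
ACCUSATION_PATTERN = re.compile(
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"  # a proper-looking full name...
    r".{0,80}?"                     # ...within ~80 characters of...
    r"(?i:\b(accused|convicted|charged|predator|assault|fraud)\b)",
    re.DOTALL,
)


def call_model(prompt: str) -> str:
    """Stand-in for whatever model endpoint the product actually calls."""
    return "Example model output for: " + prompt


def guarded_answer(prompt: str) -> str:
    """Filter the raw model output and append a disclaimer."""
    raw = call_model(prompt)
    if ACCUSATION_PATTERN.search(raw):
        # Refuse rather than publish an unverified allegation.
        return ("I can't make claims about specific, named individuals "
                "without verified sources." + DISCLAIMER)
    return raw + DISCLAIMER


if __name__ == "__main__":
    print(guarded_answer("Summarize this product announcement."))
```

Even a crude wrapper like this demonstrates the documentation point above: logging what was flagged and why gives you a paper trail showing you took foreseeable harms seriously.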

In addition to our newsletter, we offer 60+ free legal templates for companies in the UK, Canada, and the US, including employment contracts, investment agreements, and more.