
OpenAI Ex-CTO Testifies: Sam Altman Systematically Lied About AI Safety Protocols

Tags: Sam Altman, Mira Murati, OpenAI, AI safety, corporate governance, deposition, GPT model
May 06, 2026
Source: The Verge AI
Viqus Verdict: 8/10
A Governance Crisis, Not a Model Flaw
Media Hype: 7/10
Real Impact: 8/10

Article Summary

During ongoing legal proceedings, former OpenAI CTO Mira Murati delivered damning testimony, stating that Sam Altman misled her about the safety standards required for a new AI model. Specifically, she testified that Altman falsely claimed the company's legal department had determined the model was exempt from the standard internal deployment safety board review. Murati's testimony pointed to a broader pattern of alleged operational misconduct, echoing earlier testimony from co-founder Ilya Sutskever and former board member Helen Toner, who both described Altman as consistently undermining executives and lying to the board. The revelation shifts the dispute from governance disagreements to core questions of operational safety and ethical integrity in the AI development lifecycle.

Key Points

  • Mira Murati provided sworn testimony accusing Sam Altman of lying about AI model safety standards, specifically regarding the bypass of the internal deployment safety board.
  • The testimony reinforces previous allegations from other key figures, suggesting a pattern of Altman misleading both employees and the board about internal safety protocols.
  • Murati criticized both Altman's leadership and the board's decision to oust him, stating that the company was at "catastrophic risk of falling apart" at the time.

Why It Matters

This is high-signal news for industry professionals because it undermines the trust required to scale frontier AI responsibly. Murati's testimony takes the legal battle beyond corporate infighting and frames it as a failure of fundamental AI governance and safety practice. Enterprises relying on OpenAI's technology must now weigh the heightened operational risk and regulatory scrutiny these allegations imply, and may need to reevaluate how they vet model deployments and manage their dependence on foundational models.
