
Irregular Secures $80M to Combat Evolving AI Security Risks

AI Security Funding Irregular TechCrunch Large Language Models Venture Capital Security Evaluations
September 17, 2025
Viqus Verdict: 9
Defense in Depth
Media Hype 7/10
Real Impact 9/10

Article Summary

Irregular, formerly Pattern Labs, has raised $80 million to tackle the increasingly complex security challenges posed by sophisticated AI models. The funding round, led by Sequoia Capital and Redpoint Ventures, highlights a critical need within the AI industry: assessing and mitigating risks beyond the established vulnerabilities of existing models. Irregular's approach, centered on simulating adversarial AI interactions within complex network environments, allows it to identify 'emergent risks' before models are deployed. The company's 'SOLVE' framework, widely used in industry security evaluations for models like Claude 3.7 Sonnet and OpenAI's o3/o4-mini, is being expanded to focus on anticipating and addressing the novel behaviors of frontier models. The investment reflects a broader industry trend as companies grapple with the potential for misuse and unintended consequences of increasingly capable AI systems, particularly as they become more adept at finding software vulnerabilities.

Key Points

  • Irregular secured $80 million in new funding, valuing the company at $450 million.
  • The funding reflects a growing concern about the security of rapidly evolving large language models and the potential for ‘emergent risks’.
  • Irregular’s methodology, utilizing simulated environments and adversarial AI testing, is designed to proactively identify and mitigate novel AI security threats.
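To make the idea of adversarial simulation testing concrete, here is a minimal, entirely hypothetical sketch. It is not Irregular's actual SOLVE framework or any published API; the network layout, the `attacker_policy` stand-in, and the breach-rate metric are all illustrative assumptions. The pattern it shows is the general one described above: run a simulated "attacker" model against a mock environment many times and measure how often it reaches a sensitive asset.

```python
import random

# Hypothetical toy environment (NOT Irregular's real tooling):
# a mock network mapping each host to the hosts reachable from it.
SIMULATED_NETWORK = {"web-01": ["db-01"], "db-01": []}

def attacker_policy(current_host, rng):
    """Stand-in for an AI attacker agent: choose a lateral-movement target."""
    targets = SIMULATED_NETWORK.get(current_host, [])
    return rng.choice(targets) if targets else None

def run_evaluation(steps=10, seed=0):
    """Run repeated attack simulations; return the fraction of runs in
    which the attacker reaches the sensitive host ('db-01')."""
    rng = random.Random(seed)
    breaches = 0
    for _ in range(steps):
        host = "web-01"  # attacker starts at the internet-facing host
        while host is not None:
            if host == "db-01":
                breaches += 1
                break
            host = attacker_policy(host, rng)
    return breaches / steps

if __name__ == "__main__":
    print(f"breach rate: {run_evaluation():.0%}")
```

In a real evaluation harness, `attacker_policy` would be a frontier model issuing actions against a sandboxed environment, and the scoring would cover far more than reachability; the value of the pattern is that risky emergent behavior surfaces in simulation rather than in production.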

Why It Matters

This funding round is a significant indicator of the escalating security concerns surrounding artificial intelligence. As AI models become more powerful and adaptable, the potential for misuse and unintended consequences grows. The investment demonstrates that the AI industry recognizes the need for specialized security expertise to defend against these emerging threats. For professionals in cybersecurity, AI development, and risk management, this news underscores the importance of proactively addressing AI security challenges and the potential for significant disruption and liability.
