Irregular Raises $80M to Secure Frontier AI Models

AI security specialist Irregular announced an $80 million funding round on Wednesday. The round was led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport.
According to a source familiar with the transaction, the round values Irregular at $450 million.
The Future of AI Interaction and Security
Co-founder Dan Lahav told TechCrunch that the company expects a significant share of future economic activity to come from interactions between humans and AI systems, and between AI systems themselves.
That shift, Lahav argued, will break existing security infrastructure at multiple points.
Irregular’s Role in AI Evaluation
Previously operating as Pattern Labs, Irregular has already established itself as a key contributor to AI security evaluation. Its research is cited in security assessments for models such as Claude 3.7 Sonnet, as well as OpenAI's o3 and o4-mini models.
The company's SOLVE framework, which scores a model's ability to detect vulnerabilities, is also widely used across the industry.
Proactive Risk Detection
While Irregular has made strides in identifying existing model risks, the new funding targets a more forward-looking objective: identifying emergent risks and behaviors before they surface in real-world applications.
To that end, the company has built an elaborate suite of simulated environments for rigorously testing models before they are deployed.
Simulated Attack and Defense Scenarios
“We employ complex network simulations where AI agents function as both attackers and defenders,” explains co-founder Omer Nevo.
This allows the company to assess the resilience of defenses when new models are introduced, pinpointing areas of weakness.
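Irregular has not published the internals of these environments, but the general shape of an attacker-versus-defender exercise can be sketched. In the toy loop below, all names (NETWORK, attacker_turn, defender_turn, run_episode) and probabilities are hypothetical stand-ins; a real harness would put an AI agent, not a coin flip, behind each turn:

```python
import random

# Hypothetical toy network: each host has open "weaknesses" an attacker can try.
NETWORK = {
    "web-server":  {"weaknesses": {"outdated_tls", "sql_injection"}, "compromised": False},
    "db-server":   {"weaknesses": {"weak_creds"}, "compromised": False},
    "workstation": {"weaknesses": {"phishing", "unpatched_os"}, "compromised": False},
}

def attacker_turn(network, skill):
    """Pick a reachable host and attempt to exploit one of its weaknesses."""
    targets = [h for h, s in network.items()
               if s["weaknesses"] and not s["compromised"]]
    if not targets:
        return None
    host = random.choice(targets)
    weakness = random.choice(sorted(network[host]["weaknesses"]))
    if random.random() < skill:  # stand-in for an AI attacker's capability
        network[host]["compromised"] = True
        return (host, weakness)
    return None

def defender_turn(network, patch_budget=1):
    """Patch up to `patch_budget` weaknesses on hosts that are still safe."""
    for state in network.values():
        if patch_budget == 0:
            break
        if state["weaknesses"] and not state["compromised"]:
            state["weaknesses"].pop()
            patch_budget -= 1

def run_episode(rounds=10, attacker_skill=0.5):
    """Alternate attack and defense; return the breaches that occurred."""
    network = {h: {"weaknesses": set(s["weaknesses"]), "compromised": False}
               for h, s in NETWORK.items()}
    breaches = []
    for _ in range(rounds):
        hit = attacker_turn(network, attacker_skill)
        if hit:
            breaches.append(hit)
        defender_turn(network)
    return breaches

# Raising `attacker_skill` stands in for introducing a more capable model,
# revealing which hosts fall first as attacks improve.
for skill in (0.3, 0.6, 0.9):
    print(skill, run_episode(attacker_skill=skill))
```

In an actual evaluation, the skill knob would be replaced by a live model driving each attack, so rerunning the same harness against successive model releases shows where defenses hold and where they give way, which is the comparison Nevo describes.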
Growing Concerns in AI Security
The AI industry is experiencing heightened scrutiny regarding security, as new risks continue to surface.
OpenAI recently undertook a comprehensive overhaul of its internal security protocols, specifically addressing concerns related to potential corporate espionage.
AI’s Dual Role in Software Vulnerabilities
At the same time, AI models are becoming increasingly adept at finding software vulnerabilities, a capability with serious implications for defenders and attackers alike.
A Continuous Pursuit of Security
For the founders of Irregular, this represents just the first in a series of security challenges brought about by the increasing sophistication of large language models.
“While the primary focus of leading AI labs is the creation of increasingly advanced models, our mission is to secure those models,” Lahav emphasizes.
“However, given the dynamic nature of this field, there is inherently a substantial amount of future work to be done.”