Irregular, the world’s first frontier AI security lab, has secured $80m in fresh funding as it looks to set new industry standards for safeguarding advanced artificial intelligence systems.
The funding round was led by Sequoia Capital and Redpoint Ventures, with participation from Swish Ventures and several angel investors, including Wiz CEO Assaf Rappaport and Eon CEO Ofir Ehrlich.
The company, formerly known as Pattern Labs, works with some of the world’s most prominent AI research organisations, including OpenAI and Anthropic. It focuses on identifying how next-generation AI models could carry out real-world threats, such as antivirus evasions or autonomous offensive actions, and building defensive measures before these technologies are deployed.
Irregular operates by running controlled simulations on frontier AI models to assess both their potential for misuse in cyber operations and their resilience against hostile attacks. Its services give AI developers and deployers a secure way of uncovering vulnerabilities early, so safeguards can be built in before deployment.
The fresh funding will be channelled towards expanding Irregular’s research and development, as well as scaling its work with AI creators and policy stakeholders to strengthen defences before new models reach the public. The company has already established significant traction, with its evaluations referenced in OpenAI’s system cards for o3, o4-mini and GPT-5, and its SOLVE framework adopted by both the UK government and Anthropic.
Irregular has also made notable contributions to industry standards and policy. It co-authored a whitepaper with Anthropic on using Confidential Computing to protect AI model weights and user privacy, and collaborated with RAND on a seminal paper addressing AI model theft and misuse, shaping Europe’s policy discussions on AI security. Researchers at Google DeepMind have also cited its work in a study on emerging AI-driven cyber threats.
Irregular CEO Dan Lahav said, “Irregular has taken on an ambitious mission to make sure the future of AI is as secure as it is powerful. AI capabilities are advancing at breakneck speed; we’re building the tools to test the most advanced systems way before public release, and to create the mitigations that will shape how AI is deployed responsibly at scale.”
Sequoia Capital partner Shaun Maguire added, “The real AI security threats haven’t emerged yet. What stood out about the Irregular team is how far ahead they’re thinking. They’re working with the most advanced models being built today and laying the groundwork for how we’ll need to make AI reliable in the years ahead.”
The company has already reached millions in annual revenue and is positioning itself as a critical player in ensuring frontier AI is deployed safely and securely.
Copyright © 2025 RegTech Analyst