How AI is transforming the role of compliance in 2026


Since late 2022, artificial intelligence (AI) and its influence on operational practices have evolved at a breakneck pace. Compliance is no different, with businesses scrambling to future-proof their processes to keep up with the competition.

No longer is AI operating as a standalone control. The technology is being embedded directly into everyday workflows at a rapid pace. This brings with it a wide range of risks. For Vall Herard, CEO and co-founder of Saifr, this shift fundamentally changes the risk profile.

“When AI is embedded directly into workflows, a single error can scale much faster than human oversight can see it,” said Herard. He points to three key risks surrounding this, the first being that cascading failures can happen much more quickly, especially in systems where multiple models work as multiple agents.

Herard explained, “A single hallucination or a single flaw in the logic in one part of the system can get propagated down to other subsystems and basically poison the decision making of all the downstream systems from there. Consequently, I think that a lot of the work at the implementation stage around understanding the limitations of the model, understanding the use cases which lend themselves to higher rates of hallucination, are key areas we focus on in doing implementation with clients.”

Another risk raised by Herard was the ‘confused deputy’ problem: if a system is prone to hallucination and can be consistently tricked into giving a specific set of answers, it is vulnerable to systemic failures and needs to be tested for them.

The final risk suggested by Herard was long-term memory corruption or memory poisoning.

Herard explained, “The fact is, a lot of these agent-based systems have long-term information, so the extent to which the bad information is retained within the memory of the agent, this may lead to outcomes where people choose an agent without doing the proper validation and checking on a consistent and repeated basis. So, a model risk management framework should be in place to mitigate these risks.”

How AI alerts change compliance

Real-time AI alerts are changing the way compliance issues are identified and addressed. For Herard, the real shift is cultural as much as technical.

“Real-time alerts, when paired with an efficient workflow system, fundamentally change the nature of the work,” he says. Compliance moves from reactive investigation — hunting for information, chasing status updates — to proactive resolution. Issues are surfaced immediately. Decisions are made faster and friction drops.

The difference, he argues, is alignment. In traditional compliance environments, progress often depends on scheduled check-ins: status meetings designed simply to ensure everyone is looking at the same information. Those meetings introduce delay. They create latency in decision-making. Real-time AI alerts, by contrast, generate what Herard describes as “hyper-transparency.” Every stakeholder sees the same update at the same moment. The result is asynchronous alignment. Instead of convening meetings to establish basic facts, teams manage by exception.

“If the AI is working the way it is supposed to, you’re getting all the real time alerts, the hyper transparency, that make the organisation overall more efficient,” said Herard. Roughly 95% of issues move through the workflow without the need for collective debate, with alerts flagging what needs mitigation. Relevant stakeholders review the same information and agreement is reached quickly; the remaining 5% require escalation and discussion.

This, for Herard, is where a lot of the value realisation in the ROI comes in. Clients using Saifr in marketing communications compliance, for example, report fewer status meetings, shorter review cycles and more focused interactions between business and compliance teams.

He said, “I think this all ties back to the initial premise of us building Saifr, which is to explore how we remove inefficiency and friction so that we can make these processes more transparent and faster and get to the decision more quickly.”

Why AI risk has become a board-level issue

Over the last couple of years, AI risk has become a board-level issue, rather than a purely technical or operational one. What is the reason for this?

“As we move into 2026, and we certainly saw this through 2025, AI has gone from being a CIO-led technical project to a permanent board-level imperative,” he says. What was once treated as an experimental add-on is now embedded in core workflows. AI is no longer “a plug-in thing that’s off to the side.” It is becoming a driver of competitive survival.

Put simply, Herard believes firms that embed AI into how they operate will outpace those that do not. The efficiency metrics already bear that out. But with that advantage comes responsibility, and that responsibility cannot sit solely with the technology function.

Boards carry fiduciary duties of oversight. And when AI systems influence business outcomes, that duty extends to how those systems are governed. “A company cannot simply say, ‘the AI hallucinated’ and that resulted in a bad outcome,” Herard notes. Liability does not evaporate because a machine produced the error. In certain circumstances, directors themselves can face exposure for failures of oversight, a question tied directly to the business judgment rule.

The result is that sharper boardroom conversations are being had. Herard cited taking part in a panel at a mutual fund board members’ association, where AI governance was top of mind for those involved. Across the wider sector, directors are asking how they should interpret their responsibilities, not just in theory but operationally.

Multiple forces are converging at the same time. The first is that hallucinations and model failures are proving to be governance issues, not merely technical glitches. The second is accountability. Herard cited a previous MIT study that found roughly 300 companies spent a combined $40bn on AI initiatives in a single year. The key questions Herard asks are how many of those investments delivered a positive ROI, and whether the companies’ boards actually oversaw all of that spending.

A third area is reputational integrity. As with ESG before it, AI is generating its own version of ‘AI washing’, claims Herard, where companies are overstating or mischaracterising their capabilities. “The extent to which every company is now an AI company,” Herard says, means AI risk is enterprise risk. And enterprise risk lives at board level.

Operational conversations with CIOs and CTOs continue. But increasingly, Herard finds that directors themselves are engaging directly — asking how to ensure that the AI, in his words, is “safer to our board.”

Bullet-proofing AI-driven compliance

The million-dollar question for many across a wide range of sectors right now is this: how do organisations ensure AI-driven compliance tools are transparent, explainable and defensible to regulators?

For Herard, accountability has shifted from high-level policy statements about how the industry thinks about AI, to something far more concrete: having verifiable, data-driven governance documents. People, he claims, want to understand how many false positives are being generated, the true positive rate of a system, and how often checks are being made to ensure there is no drift occurring.

These are operational metrics, not philosophical commitments. And they form the backbone of defensibility. But numbers alone are not enough. “There’s the narrative explanation versus the quantitative reporting,” Herard notes. Trust requires both. Data demonstrates control; explanation makes that control intelligible.
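The operational metrics Herard mentions are straightforward to compute. As a minimal illustrative sketch (function names, thresholds and data are invented for this example, not drawn from Saifr's product), the true positive rate, false positive rate and a basic drift check over a batch of alerts might look like:

```python
def alert_metrics(flagged, actual_violations):
    """Compute true/false positive rates for a batch of compliance alerts.

    flagged: list of bools, whether the system raised an alert.
    actual_violations: list of bools, whether a violation really existed.
    """
    tp = sum(1 for f, a in zip(flagged, actual_violations) if f and a)
    fp = sum(1 for f, a in zip(flagged, actual_violations) if f and not a)
    fn = sum(1 for f, a in zip(flagged, actual_violations) if not f and a)
    tn = sum(1 for f, a in zip(flagged, actual_violations) if not f and not a)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return {"true_positive_rate": tpr, "false_positive_rate": fpr}

def drift_check(baseline_rate, current_rate, tolerance=0.05):
    """Flag drift when the current alert rate strays from the baseline
    by more than the tolerance (an illustrative threshold)."""
    return abs(current_rate - baseline_rate) > tolerance
```

Run periodically over labelled review outcomes, figures like these form the kind of verifiable, data-driven governance documentation Herard describes, rather than high-level policy statements.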

In practice, firms are pursuing several approaches. One is traditional explainable AI, techniques that attempt to translate mathematical model weights into human-understandable reasoning. Herard gives the example of identifying which input variables carried the greatest influence in a given decision. Firms may generate feature-importance rankings or visual heat maps to illustrate which factors mattered most.
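For a simple linear scoring model, the feature-importance ranking Herard describes can be sketched in a few lines. This is a toy illustration (the feature names and weights are hypothetical; production systems typically use techniques such as SHAP or permutation importance), but the principle of ranking inputs by their contribution to a single decision is the same:

```python
def feature_importance(weights, feature_values):
    """Rank features by the magnitude of their contribution
    (weight * value) to one particular decision."""
    contributions = {
        name: weights[name] * feature_values[name] for name in weights
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical marketing-compliance features for a single document.
weights = {"promissory_language": 2.0, "missing_disclosure": 1.5, "doc_length": 0.1}
values = {"promissory_language": 1.0, "missing_disclosure": 0.0, "doc_length": 3.0}

ranking = feature_importance(weights, values)
# The top-ranked entry identifies the input that drove this decision most.
```

The ranking can then feed a bar chart or heat map for non-technical reviewers, which is exactly where, as Herard notes next, accessibility problems begin.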

The challenge, Herard observes, is accessibility. “It can be very abstract, and sometimes it’s hard for a non-technical person to really grasp and understand what some of the factors are.” For non-technical stakeholders, particularly at the board level, a heat map of weighted variables doesn’t always translate into genuine understanding. The problem can become even more complex when models rely on latent variables that have no observable meaning.

Another approach gaining traction, Herard states, is counterfactual explanation. Instead of asking why the model reached a decision in purely statistical terms, the system is prompted to answer a more practical question: what would have needed to change for the outcome to be different?
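A counterfactual search can be illustrated with a toy threshold-based compliance score. Everything here is an invented assumption for the sketch (the scoring function, feature names and threshold bear no relation to any real system); the point is only the mechanism of finding the smallest change that flips the outcome:

```python
def score(features):
    """Toy compliance risk score: a weighted sum of two counts."""
    return 2.0 * features["risk_phrases"] + 1.0 * features["missing_disclaimer"]

def counterfactual(features, threshold=2.0, step=1):
    """Find the smallest reduction in a single feature that brings a
    flagged document back under the threshold, i.e. 'what would have
    needed to change for the outcome to be different?'"""
    for name in features:
        trial = dict(features)
        while trial[name] > 0:
            trial[name] -= step
            if score(trial) < threshold:
                return name, trial[name]
    return None

# A document flagged with score 5.0; the counterfactual tells us which
# single change would have produced a pass instead of a flag.
flagged = {"risk_phrases": 2, "missing_disclaimer": 1}
result = counterfactual(flagged)
```

The answer ("remove the promissory phrases and it would have passed") is far easier for a non-technical stakeholder to act on than a table of model weights.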

A third technique involves breaking complex models into local, interpretable steps, said Herard, effectively decomposing a sophisticated system into a sequence a human can follow from point A to point B. The aim is traceability, with the ability to walk through the chain of reasoning and see how a specific conclusion was reached.

“It’s really about having some transparency into the inner workings so that at the board level, they can explain it in a natural way instead of a technical way, making it easier to understand,” said Herard.

Balancing act

How can businesses balance time savings and efficiency gains with responsibility for the outcomes AI influences? Here, Herard makes clear it isn’t that easy to pinpoint.

He explains, “In many instances where AI has been injected, there isn’t a historical benchmark to go against. It isn’t as if people in the past who were looking at compliance frameworks were keeping track of how much time they were spending on a specific task. Consequently, comparing that against a model where the result is almost instant is very hard to judge on a historical basis.”

One of the decisions Saifr has made to help in this conversation is to create benchmarks: it takes a set of tasks and gives them to a team of qualified individuals with different levels of experience.

“We come up with a time estimate for giving a task to a model or an agent and find out what is the average time savings you can see out of this process, then embed some of that knowledge into our underlying models,” said Herard. “When you are running a Saifr model, you are provided with an estimated time-saving based on those human trials across a team of people, so that you can see the efficiency gain in the process.”
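The arithmetic behind the benchmark Herard describes is simple. As a minimal sketch with invented figures (the timings below are illustrative, not Saifr data), averaging a panel of human reviewers' timings and comparing against the model's turnaround yields the estimated saving:

```python
def estimated_saving(human_minutes, model_minutes):
    """Average a panel of human timings for one task and report the
    estimated time saved when the model handles it instead."""
    avg_human = sum(human_minutes) / len(human_minutes)
    saving = avg_human - model_minutes
    return avg_human, saving

# Hypothetical timings (in minutes) from four reviewers of varying
# experience on the same review task, versus a near-instant model pass.
avg_human, saving = estimated_saving([30, 45, 25, 40], model_minutes=5)
```

Because the human baseline depends on the specific workforce, the same calculation run internally will give each firm its own figure, which is exactly why Herard encourages clients to repeat the exercise themselves.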

From here, Saifr encourages businesses to run that exercise internally. In Herard’s view, this is where results will differ across firms, depending on the workforce they have in place and its level of experience.

Herard concluded, “Once you have this estimate of time, then you essentially get to a position where you can make an informed decision in terms of what is the real gain in time savings and efficiency, with the responsibility for it to be an explainable outcome of the AI model.”


Copyright © 2026 RegTech Analyst
