What regulators will expect

AI is no longer peripheral – it is embedded in decision-making, risk, and control. As that shift accelerates, tolerance for ambiguity around accountability is collapsing. Regulators are no longer asking whether firms use AI, but whether they understand, control, and can stand behind it.

This is where the accountability gap becomes real. The core tension remains – machines can act, but responsibility is human. And while firms debate how far to push AI, regulators are converging on a clearer standard of what “acceptable” looks like.

Their focus is not philosophical, but practical: can you evidence control, explain outcomes, and intervene when things go wrong? The principle is simple—if AI materially influences an outcome, it must be governable to the same standard as a human decision-maker, if not higher. The question is no longer whether AI will be regulated. It’s whether firms are ready for how.

This fourth and final part of the Accountability Gap Series looks at what regulators will expect around AI and compliance. It completes the series, following the previous instalment, which asked how firms can govern AI without slowing compliance to a crawl. The first two parts focused on the accountability problems that haven’t been solved within AI in compliance and on what decisions machines can be allowed to make, respectively.

Accountability signals

One of the first key questions to ask on this topic is what accountability signals regulators are sending already.

According to Mike Lubansky, SVP of strategy at Red Oak, regulators have been consistent on one core principle: accountability cannot be outsourced. Not to vendors, and not to AI. What is changing, he remarks, is not who is accountable, but what firms must now be able to demonstrate.

Lubansky states that regulators are sending clear signals in three areas. The first is a shift from outcomes to process plus evidence. “It is no longer enough to show that a decision was reasonable. Firms must show how the decision was reached, what data and logic were used, and what controls were in place at the time. This reflects a broader shift toward evidence-based supervision, where decisions must be reconstructable.”

The second area outlined by Lubansky is from point-in-time controls to continuous oversight. He explained, “Static governance is no longer sufficient. Regulators increasingly expect: ongoing validation of automated systems, monitoring for drift and performance degradation, and documented change management for models and rules. This aligns with what firms are already experiencing: automation is not a one-time implementation — it is a continuously governed system.”

The third and final area is the shift from human in the loop to human accountability by design. Here, Lubansky detailed that early guidance emphasised human review. However, that expectation is evolving.

“The focus is now on clear ownership of decisions and systems, defined escalation and override mechanisms, and demonstrable supervisory engagement. In other words, regulators are less concerned with whether a human touched every decision, and more concerned with whether accountability is structurally embedded in the process,” said Lubansky.

Lubansky finished on this point by stressing that regulators are not asking firms to slow down automation – they are instead asking them to prove that automation operates inside a system of control that is visible, testable and accountable.

Meanwhile, Scott Nice, CRO at Label, made clear his view that regulators have been steadily widening expectations for a number of years now. This has come not necessarily through entirely new frameworks, but through tightening how existing regulations are interpreted and enforced.

“What is becoming clear is that regulators are no longer satisfied with firms simply having controls in place, they expect those controls to be demonstrably effective, consistently applied, and fully auditable,” said Nice.

While regulatory approaches still vary by jurisdiction, with some being more audit-driven and others more enforcement-led, there is a common direction of travel, Nice believes.

He added, “Firms are expected to be able to evidence not just what decisions were made, but how and why those decisions were reached. The signal being sent is that compliance must move from being reactive and procedural to being defensible, traceable, and embedded within core operations.”

Areg Nzsdejan, CEO of RegTech firm Cardamon, emphasised that regulators don’t want to wait for a crisis to set expectations on AI – they’re already trying to signal this through existing frameworks.

He said, “The FCA’s Consumer Duty expects firms to evidence customer outcomes – not just processes. Where AI influences those outcomes, firms need enough explainability, governance and monitoring to understand, challenge and evidence its impact.”

Nzsdejan pointed to the PRA’s model risk management principles, which reinforce that banks need robust controls over models, including documentation, validation and oversight. “And under the EU AI Act, certain AI use cases in financial services – such as creditworthiness assessments and life/health insurance risk assessment or pricing – are explicitly treated as high-risk. None of these are targeted specifically at compliance AI.”

Nzsdejan finished on this point by making clear that these frameworks set the underlying expectation that firms can show how their AI works, when it works, and what happens when it doesn’t.

“The signal is consistent across jurisdictions: accountability follows the decision, not the technology,” he said.

Supradeep Appikonda, COO and co-founder of 4CRisk.ai, added his view that regulators are beginning to require firms that use AI to prove how they control it, backed up by Conformity Assessments and Mitigation Plans.

He gave the example of the FTC and SEC, which want to see a “responsible person” in place to oversee specific automated decisions made by AI. “That person and their team must understand the AI’s logic well enough to intervene, override, or shut down the system if it deviates from expected performance,” Appikonda said.

Additionally, Ryan Swann, CRO of Risksmart, said that regulators are making one thing abundantly clear – accountability doesn’t disappear with automation.

He said, “From the FCA to global supervisory bodies, there’s a consistent signal that firms must retain clear ownership of outcomes—regardless of how decisions are made. “Black box” systems are no longer acceptable without traceability, governance, and human oversight.”

Allison Lagosh, V.P. and head of compliance for Saifr, stated that regulators are signalling that accountability remains fully human and unchanged regardless of AI use.

She gave the example of the 2026 FINRA Annual Regulatory Oversight Report, which she said sends several key signals. The first is that technology neutrality is not flexibility.

She explained, “FINRA explicitly states that existing rules apply without exception to GenAI. Firms cannot argue novelty, experimentation, or vendor complexity to dilute accountability under Rules 3110 (Supervision), 2210 (Communications), and recordkeeping obligations.”

Secondly, supervisory responsibility cannot be delegated to machines. In their report, FINRA detailed that AI may assist decisions, but registered principals remain responsible for outcomes. “This is especially pointed out where AI is used for marketing content, customer communications, AML alerts, and recommendations,” said Lagosh.

The third point referenced is that governance visibility matters as much as outcomes. “Regulators are signaling that they expect firms to show how AI is governed—approval processes, testing evidence, escalation paths, and senior-level reporting—not just that controls exist on paper.”

In the view of Copla CEO & Co-founder Aurimas Bakas, the clearest signal is structural: regulators are moving beyond narrative.

He said, “Under DORA’s ICT third-party reporting requirements, firms must submit machine-readable, field-level data — register entries, contract classifications, dependency mappings — in place of policy documents. The FCA’s incoming register of material arrangements under PS26/2 follows the same logic. When a regulator builds a data intake schema, they’re telling you exactly what they intend to audit.

“That’s the accountability signal. The firms that read it early are already building the underlying data infrastructure. The rest will be doing remediation under pressure.”
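To make the contrast between policy documents and “machine-readable, field-level data” concrete, the short sketch below serialises two hypothetical third-party register entries into a structured file. It is a minimal illustration in Python; the field names are assumptions chosen for the example and do not reproduce the official DORA Register of Information templates.

```python
import csv, io

# Hypothetical third-party register entries. Field names are illustrative only
# and are not the official DORA Register of Information template.
register = [
    {"provider": "CloudCo", "service": "hosting",
     "contract_class": "critical", "supports_function": "transaction monitoring"},
    {"provider": "ModelVendor", "service": "AI screening",
     "contract_class": "important", "supports_function": "AML alerting"},
]

# Field-level, machine-readable output (CSV here; the point is the structure,
# not the specific format any regulator mandates).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(register[0].keys()))
writer.writeheader()
writer.writerows(register)
print(buf.getvalue())
```

The design point is the one Bakas makes: once the submission is a data schema rather than a narrative, the regulator can query it directly, which is why the underlying data infrastructure matters more than the accompanying documents.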

What must be proved about AI

“What must firms prove about AI decisions before something goes wrong?” is fast becoming a central test of modern compliance – and the standard is exacting.

As Nice makes clear, the question is not whether AI is perfect, but whether it is controlled. “Firms need to be able to demonstrate that any AI-driven decisioning, particularly where more autonomous or agentic models are being used, is operating within a clearly defined and controlled framework,” he says. Regulators are less concerned with flawless outputs than with evidence that systems are governed, tested, and understood.

That evidence has to be concrete. Firms must be able to show “how models are configured, what data they are trained on, how outputs are validated, and what controls exist to detect and manage errors.” In short, the AI lifecycle needs to be transparent and defensible end-to-end.

Critically, this cannot be static. “Governance cannot be static, it needs to be continuous, with ongoing monitoring, testing, and recalibration,” Nice adds. Oversight has to evolve alongside the models themselves.

The reason is scale. “A single flaw in logic or data can be multiplied across thousands or millions of decisions.” That amplification effect is what regulators are zeroing in on. The implication here is that firms must prove, in advance, that they understand the risks of AI at scale and have built safeguards to contain them.

Cardamon CEO Nzsdejan cuts more directly to the gap most firms are still missing. “This is the question most firms are underestimating,” he says, warning that AI accountability is too often framed as a future regulatory task rather than a current operational demand. “The instinct is to treat AI accountability as a regulatory problem. That’s the wrong lens.”

In reality, expectations are straightforward and immediate. Firms must be able to show: (1) that they understood what the AI was deciding and why; (2) that accountability sat with a named individual; (3) that decisions can be reconstructed end-to-end – “what data, what logic, what outcome” – and (4) that controls were in place and tested.

The imbalance here is stark for businesses. “Most firms can answer question two, very few can confidently answer one, three, or four.” Assigning ownership is easy; proving understanding, traceability, and control is not – and that is exactly where regulators will focus.

Lubansky, meanwhile, detailed that the regulatory bar is quietly shifting from reactive explanation to proactive defensibility.

He commented, “Firms should assume they will be expected to prove, in advance, that decisions are reconstructable, decisions align to firm-defined policy, oversight is active, and the system is built for audit not just output.”

On the first, he explained that for any given outcome, firms must be able to show the inputs used, the logic or policy applied, the model or rule version at the time, the outcome produced, and any human intervention or override. “Most firms cannot consistently do this today,” he said.
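To make “reconstructable” concrete, the sketch below bundles Lubansky’s five elements into a single record written for every automated decision. It is a purely illustrative Python example: the DecisionRecord structure, field names and sample values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Optional
import json

@dataclass(frozen=True)
class DecisionRecord:
    """One reconstructable automated decision: inputs, logic, version, outcome, override."""
    decision_id: str
    inputs: dict[str, Any]                # the data the system saw at decision time
    policy_applied: str                   # the firm policy or rule the logic implements
    model_version: str                    # model or rule-set version in force at the time
    outcome: str                          # what the system decided
    human_override: Optional[str] = None  # who intervened and what changed, if anyone
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_audit_json(self) -> str:
        """Serialise for an audit-ready, append-only store."""
        return json.dumps(asdict(self), sort_keys=True)

# Example: a flagged-communication decision followed by an analyst override.
record = DecisionRecord(
    decision_id="comms-2026-000142",
    inputs={"channel": "email", "risk_score": 0.87},
    policy_applied="marketing-review-policy-v3",
    model_version="surveillance-model-2.4.1",
    outcome="escalate_to_review",
    human_override="analyst cleared after manual review",
)
print(record.to_audit_json())
```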

On aligned decisions, Lubansky explained that regulators are not evaluating AI in isolation; they are evaluating whether the system reflects the firm’s stated risk appetite, whether policies are correctly translated into system logic, and whether decisions are consistent with regulatory obligations. If a system produces a “correct” outcome for the wrong reason, it is still a governance failure, he said.

Active oversight, Lubansky went on to say, is demonstrated by ongoing monitoring and testing, defined escalation paths, evidence of challenge and override, and clear ownership of system performance.

Finally, on the system part, Lubansky explained this is where many firms fall short. “It is not enough that a system produces good outcomes. It must produce: traceable decisions, consistent documentation, and audit-ready evidence.”

Bakas’s response here has two parts. First, that a human reviewed and approved the output – with genuine oversight rather than passive sign-off. Second, that the logic behind the decision is traceable.

He said, “If an AI system generates a risk profile or a policy document, the firm needs to show which regulatory framework it was mapped to, what inputs shaped it, and when it was last reviewed. In any defensible compliance setup, the audit trail has to be native to how the decision was made.”
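One minimal reading of Bakas’s “native” audit trail is that the record is written by the same code path that produces the output, rather than reconstructed afterwards. The sketch below assumes a hypothetical decide_and_log helper, an illustrative risk-score threshold and a JSONL file as the trail; none of these come from a specific product or rulebook.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("decision_audit.jsonl")

def decide_and_log(inputs: dict, model_version: str, framework: str) -> str:
    """Produce an outcome and append its audit record in the same step."""
    # Illustrative decision logic: escalate anything above an assumed risk threshold.
    outcome = "escalate_to_review" if inputs.get("risk_score", 0) >= 0.8 else "auto_clear"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "mapped_framework": framework,   # which regulatory framework the output maps to
        "inputs": inputs,                # what shaped the decision
        "model_version": model_version,
        "outcome": outcome,
        "last_reviewed": None,           # filled in when a human review takes place
    }
    # The trail is written at decision time, not rebuilt later from scattered logs.
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(record) + "\n")
    return outcome

print(decide_and_log({"item_id": "000143", "risk_score": 0.91},
                     "surveillance-model-2.4.1", "FCA Consumer Duty"))
```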

Swann added, “The expectation is shifting from reactive justification to proactive assurance. Firms need to demonstrate that AI models are explainable, tested, and governed before they’re deployed—not after an incident. That means clear audit trails, documented decision logic, and evidence that risks have been anticipated, not just managed post-event.”

Appikonda, on the other hand, said, “Regulators will ask for a firm’s code, and pre-incident paper trails when reviewing incidents. Firms must show that training and validation data was representative, high-quality, screened for bias, and tested with edge use cases and potential attacks.

“In addition, firms need to show that they are performing real-time logging of system performance and provide mandatory reporting of serious incidents or malfunctions within strict windows, often within hours.”
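Appikonda’s point about strict reporting windows lends itself to a simple automated check on the incident log. The sketch below is a hedged illustration: the four-hour window, the Incident fields and the severity label are assumptions chosen for the example, since actual deadlines vary by regime and incident classification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed window for the example only; real deadlines depend on the regime.
REPORTING_WINDOW = timedelta(hours=4)

@dataclass
class Incident:
    incident_id: str
    detected_at: datetime
    severity: str                          # e.g. "serious" starts the reporting clock
    reported_at: Optional[datetime] = None

def overdue_reports(incidents: list[Incident], now: datetime) -> list[str]:
    """Return ids of serious incidents not reported inside the assumed window."""
    late = []
    for inc in incidents:
        if inc.severity != "serious":
            continue
        deadline = inc.detected_at + REPORTING_WINDOW
        reported_in_time = inc.reported_at is not None and inc.reported_at <= deadline
        if not reported_in_time and now > deadline:
            late.append(inc.incident_id)
    return late

now = datetime.now(timezone.utc)
incidents = [
    Incident("inc-001", now - timedelta(hours=6), "serious"),                              # never reported
    Incident("inc-002", now - timedelta(hours=6), "serious", now - timedelta(hours=5)),    # reported in time
]
print(overdue_reports(incidents, now))   # -> ['inc-001']
```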

Lagosh pushes the standard even further upstream. The requirement is not to explain AI after the fact, but to evidence control before anything breaks. “Firms must be able to prove explainability, supervision, and control in advance—not after harm occurs.”

That proof rests on a few non-negotiables. First, traceability: firms need to be able to reconstruct decisions end-to-end—“prompts, outputs, model versions, data sources, and human reviews”—with evidence strong enough to withstand examination. Second, testing: not just at deployment, but continuously, with clear records covering accuracy, bias, drift, stress scenarios, and unintended consequences, particularly in high-risk areas.

Human oversight is equally explicit. “Qualified humans must review and approve high-impact outputs,” with the authority—and competence—to override the system where needed. And none of this works without clear ownership. Supervisory responsibility has to be defined across the organisation, from business lines to the enterprise level, including third-party exposure.
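The continuous-testing expectation above can be met with simple, dated test records. The sketch below is a minimal illustration of a drift check against a baseline escalation rate; the ten-percentage-point threshold and the outcome labels are assumptions made for the example, not regulatory figures.

```python
from datetime import datetime, timezone

# Assumed drift tolerance for the example only.
DRIFT_THRESHOLD = 0.10

def drift_test(baseline_rate: float, recent_outcomes: list[str]) -> dict:
    """Record whether the share of escalations has drifted from the baseline."""
    current_rate = sum(o == "escalate_to_review" for o in recent_outcomes) / len(recent_outcomes)
    drifted = abs(current_rate - baseline_rate) > DRIFT_THRESHOLD
    return {
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "baseline_rate": baseline_rate,
        "current_rate": round(current_rate, 3),
        "drift_detected": drifted,   # a True here should trigger the defined escalation path
    }

print(drift_test(0.20, ["auto_clear"] * 60 + ["escalate_to_review"] * 40))
# -> drift_detected: True (0.40 vs a 0.20 baseline)
```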

The conclusion is blunt: it is not enough to say nothing has gone wrong. Firms must be able to show that AI decisions were designed, tested, reviewed, and governed responsibly from the outset.

Is waiting for clarity a good idea?

Is waiting for clarity a defensible strategy or a risk? On this question, Lubansky stated that waiting for regulatory clarity is increasingly a risk, and there are several reasons for this.

He said, “While detailed rules are still evolving, regulatory direction is clear, and the underlying expectations remain the same. It is an extension of existing supervisory principles into automated environments.”

Lubansky emphasised that automation is outpacing governance. Many firms have already deployed automation, scaled AI-assisted workflows, and reduced human touchpoints – and without corresponding investment in governance and auditability, there is a widening exposure gap.

“Enforcement will be backward-looking. Regulators will not assess firms based on what guidance existed at the time, they will assess whether the firm can explain and defend its decisions, whether appropriate controls were in place, and whether risks were reasonably foreseeable. The risk is not that firms move too quickly with AI — it is that they move quickly without building the accountability infrastructure required to defend it,” said Lubansky.

In his answer to the same question, Nice is unequivocal: waiting is becoming harder to defend by the day.

“Waiting for regulatory clarity is increasingly difficult to justify,” he says, particularly for firms already deploying AI in live environments. The direction of travel is clear—even where formal guidance is still evolving, regulators are signalling that responsibility does not wait. “The responsibility sits with the firm to ensure control, regardless of whether detailed guidance exists.”

That shifts waiting from caution to exposure. “A firm that chooses to wait is effectively accepting unmanaged risk,” both operationally and from a regulatory standpoint. The bar is not perfection, but proof—evidence that risks are understood and actively managed.

Swann added that waiting for regulatory clarity might feel safe, but in practice, it creates exposure. “The direction of travel is already visible: more scrutiny, more accountability, and higher expectations around transparency. Firms that delay action risk falling behind—not just in compliance, but in operational resilience.”

Appikonda was even more blunt on this topic, stating that inaction here is a ‘form of negligent oversight’. He said that firms are expected to be ‘compliant by design’ and cannot shift the blame to the AI software vendor.

He explained, “Under the EU AI Act and various U.S. state laws, the burden of proof has shifted to deployers using AI, who are now responsible for the failures of the provider companies that built the AI model.  Another new concept shows how firms can be proactive: Regulators are requiring pre-use notices. As an example, if an AI is going to price a product dynamically or screen a resume, the consumer must be told before the interaction and be able to “opt-out”.”

Lagosh is more direct: waiting is not caution—it is exposure. “Waiting for clarity is a risk—and regulators are signaling little patience for delay.”

The reason is simple. The baseline already exists. “The message is not ‘wait for AI-specific rules,’ but rather ‘apply existing rules now.’” Firms that hold back risk being seen not as prudent, but as knowingly under-supervised.

Supervision, in this context, is judged on effort and structure, not perfection. Regulators expect to see tangible progress—clear inventories of AI use, updated policies, and evidence of training and oversight. Inaction is harder to defend than an imperfect but active framework.

And once something goes wrong, the window closes quickly. “Post-incident explanations are weaker than pre-incident controls.” With risks like bias, hallucinations, and data exposure already well understood, firms will be pressed on why safeguards were not in place earlier.

The conclusion is unambiguous: waiting is not neutral – it compounds regulatory, reputational, and enforcement risk. If firms intend to rely on AI, they are expected to be governing it now, not later.

Nzsdejan was also clear on the topic, stressing it is a risk, not a strategy. He said, “The firms waiting for regulators to publish explicit AI rules before building accountability frameworks are making a costly assumption: that current frameworks don’t apply. They do.”

“You don’t need an AI-specific regulation to be expected to explain a decision your AI made. Consumer Duty applies. Model risk guidance applies. Senior manager accountability applies. The enforcement risk is a firm that cannot explain a decision and has no mechanism to reconstruct it when a regulator asks.”

Bakas remarked, “It’s a risk that firms are mispricing. The argument for waiting — “the regulation is still settling” — collapses the moment something goes wrong, because regulators will ask what was in place at the time. DORA’s RTS on ICT third-party reporting came into effect March 31.

“Firms that filed incomplete or erroneous submissions are already receiving correction requests. The firms in the best position right now are the ones who treated the first filing as a capability-building exercise, and built something they can iterate on.”

Remaining accountable

In the opinion of Tim Khamzin, founder and CEO of Vivox AI, regulators aren’t asking whether you use AI anymore – they’re asking whether you remain accountable for it.

He remarked, “If a firm cannot clearly explain why a decision was made, who validated it, how it can be challenged, that is a control failure, not a technology issue. What’s changing is the burden of proof.

“Before anything goes wrong, firms need to demonstrate that decisions are traceable, that governance is embedded, that human oversight is real, rather than something symbolic.”

As Khamzin remarked, that’s becoming the baseline, not best practice. “Waiting for regulatory clarity is the wrong strategy. By the time it arrives, expectations will already have moved. The firms that are getting ahead are treating AI decisions like regulated decisions today, with evidence, auditability and clear ownership from day one.”

Lubansky concluded the discussion with a simple point, “Regulators are unlikely to resist AI adoption in compliance. The real point of scrutiny will be whether firms can stand behind the decisions these systems produce, with clear evidence, structured governance, and demonstrable control.”
