Who owns compliance decisions in automated systems?


Automation is steadily moving from the margins of financial services into its operational core. Surveillance systems flag misconduct, onboarding platforms assess risk, and AI models increasingly recommend — or even execute — compliance actions. Yet as decision-making becomes embedded in automated systems, a fundamental question is becoming harder to answer: who actually owns the decision?

For decades, accountability in financial services was structurally simple. Humans made judgments, documented them, and regulators knew where responsibility sat. Automation complicates that model.

When compliance outcomes are shaped by machine learning models, external vendors, and complex data pipelines, the chain of responsibility becomes less visible – even as regulators continue to expect clear accountability.

How firms define accountability

A question on the mind of many in the RegTech industry right now is how businesses are defining accountability when compliance decisions are partially or fully automated.

For Janet Bastiman, chief data scientist at Napier AI, responsibility still sits with the human even for fully automated decisioning, which is why she recommends a human-in-the-loop approach. Regulators, she notes, have been clear about human accountability in their frameworks, even where those frameworks make provision for AI implementations in AML.

She explained, “Humans must remain in the loop because they are ultimately responsible for the decisions. AI can generate alerts, make recommendations, suggest rules and automations that better match analyst decisions, generate natural-language explanations, but these all have to be under the oversight of human responsibility.”

Similarly, Stephen Lovell, CPTO at RegTech firm Vixio, stressed that regulatory accountability hasn’t changed, as responsibility still sits with the regulated entity, the accountable senior manager and ultimately the board.

“Automation does not transfer liability. It changes how inputs are gathered and processed – not who is answerable,” he said. “The most mature organisations will treat AI as a decision-support layer, not a decision-maker.”

Scott Parkin, Zeidler’s head of US, made clear that in financial services all businesses are required to maintain policies and procedures for their functions, and that the heart and soul of these policies is simple: accountability.

He said, “This industry has decades of legislative and regulatory history culminating in stringent requirements for accountability, but there is one common thread: there must be a human accountable.”

As LegalTech and RegTech develop, mature and become more prevalent, Parkin said, the question for firms is how to integrate the technology into their existing and complicated policies and procedures, which nevertheless end with a human being accountable.

He remarked, “The key for firms across financial services is deciding what aspects of their policies and procedures technology can perform without impacting the final accountability. Ultimately, firms need to determine how the accountable human oversees the integration of use.”

Companies, Label CRO Scott Nice added, are increasingly confident in automating execution, but are far less precise in articulating accountability. Automation doesn’t transfer responsibility; it only operationalises pre-defined logic.

“When a system validates customer data, suppresses an alert, or generates a reportable outcome, that result reflects prior human decisions. Who approved the rule logic? Who defined materiality thresholds? Who accepted residual risk? The phrase ‘system decision’ is misleading. Systems execute configured rules. They do not assume regulatory liability.”

For Nice, mature firms separate rule design ownership, escalation ownership and model validation ownership. “That separation is what regulators will probe when outcomes are challenged,” he said.
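To illustrate that separation (a minimal sketch only; the role names and fields below are assumptions, not Label’s framework), each automated rule can carry distinct, named owners:

```python
from dataclasses import dataclass

@dataclass
class RuleOwnership:
    """Each dimension of an automated rule has a named, accountable owner."""
    rule_id: str
    design_owner: str       # who approved the rule logic
    escalation_owner: str   # who handles exceptions the rule cannot resolve
    validation_owner: str   # who periodically tests that the rule still performs

sanctions_rule = RuleOwnership(
    rule_id="sanctions-screening-v7",
    design_owner="head_of_fincrime_policy",
    escalation_owner="mlro",
    validation_owner="model_risk_team",
)
print(sanctions_rule)
```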

Where the responsibility sits

Where does the responsibility sit when automated decisions are challenged by regulators?

Here, Lovell suggests that when a regulator challenges an automated outcome, businesses must be able to explain three key things. First, the dataset: what information was used, and was it complete and current? Next, the processing logic: how was the information transformed into an output? Lastly, consistency and reproducibility: would the same inputs produce the same output?

“If firms can articulate those three layers, they move from ‘black box AI’ to defensible, explainable automation. Without that, trust erodes quickly – both internally and externally,” says Lovell.
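As an illustration of what capturing those three layers might look like in practice, here is a minimal sketch; the record structure and field names are assumptions, not Vixio’s implementation:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecisionRecord:
    """Captures the three layers a regulator may probe."""
    # Layer 1: the dataset -- what was used, and was it complete and current?
    input_snapshot: dict          # the exact inputs at decision time
    data_as_of: str               # currency of the data (ISO timestamp)
    # Layer 2: the processing logic -- how inputs became an output
    rule_version: str             # versioned, human-approved logic
    output: str                   # the decision the system produced
    # Layer 3: reproducibility -- same inputs, same output
    input_hash: str               # fingerprint so the decision can be replayed

def record_decision(inputs: dict, data_as_of: str, rule_version: str, output: str) -> DecisionRecord:
    # Hash a canonical serialisation so the decision can later be replayed and verified.
    fingerprint = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(inputs, data_as_of, rule_version, output, fingerprint)

record = record_decision(
    inputs={"customer_id": "C-1001", "risk_score": 0.82},
    data_as_of="2025-11-30T09:00:00Z",
    rule_version="aml-screening-v4.2",
    output="escalate",
)
print(asdict(record))
```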

Meanwhile, Nice suggested that when regulators question an automated outcome, the issue immediately becomes governance, not technology. “The real question is: Why was the system configured this way? Responsibility sits with the regulated entity and, ultimately, accountable executives,” he said.

For Nice, if no one is able to explain the decision logic, the change management process, the rationale for thresholds and the oversight framework, then the weakness that exists is structural. “Automation amplifies governance quality. It does not replace it,” he explained succinctly.

Bastiman, on the other hand, thinks that decisions themselves should not be automated, stating, “For example, the auto-discounting of alerts based on risk-based scoring should align to a rule that had human oversight before implementation and is clearly linked to a risk-based assessment.

“The expectations from regulators regarding explanations for decisions have not changed, so while AI is a great tool in helping draw together the data points that may be used to make a decision, as well as helping generate natural language explanations to populate SARs, the decision to discount, escalate or report to the regulator has to be made by a human.”

This, for Bastiman, is why it is so essential that risk-based assessments be well documented and operationalised, and why AI cannot be opaque in compliance workflows.

She continued, “Analysts need to understand exactly why the AI makes suggestions or raises flags, so they can document their decisions. Any rules created from AI recommendations would be based on collating the best of human decisions, with well-evidenced historical alert decisions as the basis for future automated discounting.”
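A minimal sketch of that pattern, with hypothetical names and thresholds rather than Napier AI’s actual product logic: the model only suggests, a human-approved rule gates any automated discounting, and everything else goes to an analyst whose decision is documented.

```python
def triage_alert(alert: dict, ai_suggestion: str, ai_rationale: str,
                 analyst_decision: str | None = None) -> dict:
    """The AI suggests; a human-approved rule gates any automated discounting."""
    # Auto-discounting is only allowed under a rule that had human oversight
    # before implementation and is clearly linked to the risk-based assessment.
    rule = {"id": "discount-rule-12", "approved_by": "mlro", "max_risk_score": 0.2}

    if ai_suggestion == "discount" and alert["risk_score"] <= rule["max_risk_score"]:
        decision, decided_by = "discount", f"rule:{rule['id']}"
    else:
        # Decisions to escalate or report must be made by a human.
        if analyst_decision is None:
            raise ValueError("human decision required for this alert")
        decision, decided_by = analyst_decision, "analyst"

    # Document the AI's rationale alongside the decision for the audit trail.
    return {"alert_id": alert["id"], "decision": decision,
            "decided_by": decided_by, "ai_rationale": ai_rationale}

print(triage_alert({"id": "A-77", "risk_score": 0.1}, "discount",
                   "matches well-evidenced historical discount pattern"))
```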

The view held by Parkin is that it would take an ‘earth-shattering’ and foundational regulatory change for compliance decisions, and accountability for such decisions, to be automated.

He remarked, “Human accountability is woven into the fabric of the legal and regulatory framework for financial services – a fortiori for firms subject to any level of fiduciary duty – and this is not likely to change any time soon.”

Parkin described it as a ‘Sisyphean exercise’ to try to use AI tools to make compliance decisions. “These AI tools are incredible and will make firms astronomically more efficient, optimizing nearly every area of compliance except the final decisions where a human is accountable.”

He continued by stressing that regulators will expect the same, and firms will need to be careful when implementing AI tools to ensure they can demonstrate how a human is overseeing the technology.

“Even a shred of evidence indicating that a firm’s compliance function is delegating their accountability to technology is an enormous risk,” Parkin finished.

Sufficient human oversight

Another key question being posed here is how much human oversight is considered sufficient in automated compliance workflows.

On this, Bastiman comments, “Oversight should be a given: all regulatory frameworks demand transparency, explainability and auditability for financial crime compliance. Prescriptive approaches to ‘how much oversight’ are likely to result in the check-box compliance approaches of old, whereas most regulators are transitioning to a more outcomes-based approach with a focus on collaborating to define best practice.”

For Bastiman, the goal should be to ensure that the humans-in-the-loop can explain the automations, and that any automated explanations are natural language, meaning they can be understood by humans.

Oversight should be proportionate to risk, stated Lovell. For the Vixio CPTO, low-impact, repeatable workflows may only require review by exception.

He said, “High-impact regulatory interpretation – particularly where customer harm or enforcement exposure exists – should require documented human rationale. Human sign-off is not necessarily a sign of mistrust in AI. More often, it reflects the contextual nature of compliance.”

He added that two businesses can read the same rule and reach different but ultimately legitimate conclusions based on factors like risk appetite, customer base, business model and jurisdictional footprint.

“No general-purpose model inherently understands those nuances as they apply to your business,” Lovell said.

Lovell also provides a succinct answer to whether human sign-offs reflect risk, or a lack of trust. “Neither; they reflect responsibility. To fully automate interpretive regulatory judgement, a firm would need vast domain-specific training data, significant compute infrastructure and stable, agreed interpretations across regulators. That environment rarely exists.”

Instead, what Lovell believes is emerging as the sustainable model is AI for preparation and humans for accountability.

He explained, “The opportunity is not to remove the human from compliance. It is to elevate the human, giving them better information, clearer impact analysis and structured workflows that make decisions explainable. If we get that balance right, AI does not create an accountability gap; it closes one.”

Label’s CRO Nice, meanwhile, believes companies often drift toward extremes on this point. “Either they re-check automated outputs manually, undermining efficiency, or they treat the system as a black box and assume statistical confidence equals defensibility,” he said.

For Nice, sufficient oversight should be risk-based, documented, periodically validated and exception-driven.

He expressed, “If every decision requires human duplication, the architecture is flawed. If no one can explain a decision path, oversight is insufficient. The correct model is defined intervention thresholds with structured human escalation, not parallel processing of automated outputs.”
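One way to picture that model, with illustrative thresholds that are assumptions rather than Label’s values:

```python
def route_decision(risk_score: float, model_confidence: float) -> str:
    """Defined intervention thresholds with structured human escalation,
    rather than parallel human processing of every automated output."""
    # Illustrative thresholds -- in practice these come from the firm's
    # documented risk appetite and are subject to change control.
    if risk_score >= 0.8:
        return "escalate:senior_compliance"   # high impact: documented human rationale
    if risk_score >= 0.4 or model_confidence < 0.9:
        return "escalate:analyst_review"      # exception-driven human intervention
    return "auto_process"                     # low-impact, repeatable: review by exception

assert route_decision(0.9, 0.99) == "escalate:senior_compliance"
assert route_decision(0.1, 0.95) == "auto_process"
```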

The final point on this question comes from Parkin, who said that, in his view, this is one area where compliance departments do have some insight into what is expected by regulators.

He said, “The use of technology to assist, augment, and optimize the compliance function is not new in any way. Compliance teams have dealt with this for a long time and generally understand what level of oversight by humans is needed to deploy technology for use in the compliance infrastructure.”

The question here, Parkin noted, is whether the use of AI technology requires more, or potentially less, human oversight than non-AI technology deployed historically.

“I am of the opinion that the current legal and regulatory framework governing financial services, specifically their compliance teams, for deploying technology is sufficient and clear,” said Parkin.

He continued, “The current processes can be replicated for AI with a caveat that the human overseeing the AI technology needs to understand it, as opposed to simply being able to use the internet. While the quantity of human oversight might be generally the same, the quality of the oversight might, and perhaps should, be higher as the humans involved would need to be more technologically savvy than was required historically.”

Are governance frameworks keeping up?

One of the biggest challenges for many in the industry to consider is whether the governance frameworks surrounding automation are keeping pace. On this, Bastiman is of the mind that governance frameworks have often lagged behind operations under the weight of regulatory burden.

“But with the shift to outcomes-based approaches by the likes of the Financial Conduct Authority, governance is becoming less onerous – although more important,” she remarked. “Working with the right partners can reduce the governance overhead, as the governance of underlying AI models is managed by the solution provider.”

Bastiman added that governance of self-built or black-box AI models can be challenging for financial institutions, so picking partners with a compliance-first approach to AI can help firms leverage automation without compromising on compliance.

Parkin, however, is more bearish on this point, believing they are not keeping pace. “A firm’s policies and procedures are extensive, complicated, and typically a result of multiple iterations that have changed and evolved with the market and the firm itself over time. However, similar to how policies and procedures took years to keep pace with the benefits of the internet, they are generally not keeping pace.”

More frustrating for compliance teams, Parkin added, is that policies and procedures not only need to keep pace with the current AI tech being used, but also with the unparalleled speed with which AI itself is evolving.

Nice took a similar view on this point, stating that in many cases tech adoption is outpacing governance sophistication.

“Boards approve automation strategies without always interrogating rule version control, audit traceability, change approval protocols and model drift.” In Nice’s view, governance frameworks need to evolve to include formal rule ownership, automated decision logging, defined change control and periodic rule effectiveness testing. “Automation is not the risk; the risk occurs when automation is unsupervised,” Nice finished.
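Two of those elements, automated decision logging and defined change control, might look something like the following minimal sketch (names and fields are illustrative assumptions):

```python
import datetime

DECISION_LOG: list[dict] = []
RULE_CHANGES: list[dict] = []

def log_decision(rule_id: str, rule_version: str, inputs: dict, outcome: str) -> None:
    """Every automated outcome is logged against the exact rule version that produced it."""
    DECISION_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "rule_id": rule_id, "rule_version": rule_version,
        "inputs": inputs, "outcome": outcome,
    })

def change_rule(rule_id: str, new_version: str, rationale: str, approver: str) -> None:
    """No rule change without a recorded rationale and a named approver."""
    RULE_CHANGES.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "rule_id": rule_id, "new_version": new_version,
        "rationale": rationale, "approved_by": approver,
    })

change_rule("tm-rule-3", "v5", rationale="tighten threshold after QA review",
            approver="head_of_compliance")
log_decision("tm-rule-3", "v5", inputs={"txn_id": "T-9"}, outcome="alert_suppressed")
```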

Lovell, meanwhile, stressed that governance is improving, but unevenly. “Many firms initially approached AI governance as a technology risk problem. In reality, it is a regulatory accountability problem,” he said.

The businesses moving fastest, in Lovell’s mind, are embedding clear usage boundaries, escalation paths, audit trails and traceability, defined human decision points and model version control.

Meanwhile, as detailed recently by Norm Ai, one of the persistent challenges in financial regulation is that governance frameworks rarely arrive before the technology they must oversee.

As Dan Berkovitz noted in discussion at the Central Park AI Forum, many of the most significant financial laws have historically emerged only after market failures – from post-Depression securities legislation to the reforms that followed the 2008 financial crisis.

“It’s very difficult to get prospective legislation, forward looking ahead, anticipating issues, and the political will to address them,” he said. “But after a crisis, there’s motivation.” For automated compliance systems, this raises an uncomfortable question: will accountability frameworks evolve before AI-driven decisions reshape how firms manage regulatory risk?

The accountability gap

Another perspective on this debate came from Andrew Davies, global head of FCC strategy at ComplyAdvantage, who discussed the accountability gap in AI-driven compliance, stating that this is often less about the technology and more about the transparency of the models and architecture beneath it.

He said, “At ComplyAdvantage, we believe that for a compliance decision to be truly safe to automate, it must be defensible. Responsibility sits with the firm, but that burden is only manageable when the AI can provide a single, immutable audit trail explaining the ‘why’ behind every action.”

Davies mentioned that he sees a clear distinction in which actions and decisions are suitable for automation. “Tasks involving the triage of low-risk, high-volume false positives – which can currently account for up to 85% of an analyst’s manual workload – are not just safe to automate; they are a defensive necessity in an era of real-time payments. By using agentic AI to remediate these noisy cases, firms allow their human experts to focus on the truly complex 10-15% of cases that require nuanced, human judgment.”

In the view of ComplyAdvantage’s Davies, the risk isn’t over-automation per se, but black-box automation.

He explained, “Regulators correctly demand to know the underlying logic of a decision. This is why human sign-offs should not reflect a lack of trust in AI, but rather a validation of a glass box approach – where natural language rules and clear reasoning chains allow compliance officers to stay in the driver’s seat.”

Ultimately, Davies remarked, AI should be a participant in the compliance workflow, not a replacement for it.

He concluded, “Governance frameworks must shift from periodic model reviews to continuous, real-time monitoring against golden datasets to ensure that as the machine learns, it remains aligned with the firm’s specific risk appetite and regulatory obligations.”
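A simplified sketch of that kind of continuous check, where the golden dataset and the agreement threshold are illustrative assumptions:

```python
def validate_against_golden_dataset(model, golden_cases: list[dict],
                                    min_agreement: float = 0.98) -> bool:
    """Continuously re-score cases with known, human-verified outcomes and
    flag when the model drifts away from the firm's risk appetite."""
    agreed = sum(1 for case in golden_cases
                 if model(case["inputs"]) == case["expected_outcome"])
    agreement = agreed / len(golden_cases)
    if agreement < min_agreement:
        # Drift detected: suspend automation and trigger human review.
        raise RuntimeError(f"model agreement {agreement:.1%} below threshold")
    return True

# Example with a trivial stand-in model:
golden = [{"inputs": {"risk_score": 0.9}, "expected_outcome": "escalate"},
          {"inputs": {"risk_score": 0.1}, "expected_outcome": "discount"}]
toy_model = lambda x: "escalate" if x["risk_score"] > 0.5 else "discount"
print(validate_against_golden_dataset(toy_model, golden))
```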
