Automation was meant to make compliance cleaner: faster decisions, consistent outcomes, fewer human errors. Instead, it has made ownership far less clear. When outcomes are shaped by data, models, vendors, and controls, responsibility becomes blurred, even as regulators continue to hold firms fully accountable.
Expectations have not shifted: every decision must still be explained, defended, and owned. This creates a fault line at the heart of modern compliance: decision making is distributed, but accountability is not.
The firms that stand out are those that can draw a clear, defensible line from automated output back to human responsibility.
In the second part of a two-part series on this topic, we speak to key industry thought leaders to ask who owns decisions in the age of automated compliance.
How firms define accountability
How are businesses defining accountability when compliance decisions are partially or fully automated?
Rich Kent, CTO at Taina Technology, believes that automation has firmly established itself within regulatory environments. “From document extraction tools to AI-assisted classification engines, firms are increasingly relying on technology to handle high volumes of complex data,” he said. He added that efficiency has improved and consistency has been bolstered. Despite this, the question of who remains accountable when machines help make a decision has not been fully answered.
Kent makes an attempt to answer it, stating, “The short answer is reassuringly traditional. Accountability has always sat and still sits with the institution — and with clearly defined human roles within it. Regulators have made it clear that while automation is welcome, responsibility is not transferable to software.”
He added, “In practice, firms are defining accountability across layers. Compliance leaders retain ownership of the due diligence framework. Policy teams define and approve the rules that automation follows. Technology teams manage system performance and controls. Risk and audit functions provide independent oversight. The “human in the loop” model is increasingly common: automation may process and recommend, but people review, monitor, and remain answerable.”
Kent also stated that documentation plays a vital role in any automated system. Leading institutions maintain clear mappings between automated logic and regulatory rules, preserve decision audit trails and implement structured change management.
He explained, “If a regulator asks, “Why was this entity classified as a Financial Institution?” firms must be able to explain the reasoning — not simply point to an algorithm.
“Ultimately, automation has not diluted accountability; it has refined it. The focus has shifted from “who reviewed this file?” to “who governs the system that reviews these files?” In the FATCA and CRS world, technology may assist with the heavy lifting, but human stewardship remains firmly in charge — just as regulators would expect.”
Most firms are not “redefining accountability” in any meaningful way. As Mike Lubansky, SVP, Strategy, at Red Oak explains, they are “embedding automation into existing supervisory frameworks” and clarifying where oversight sits within those workflows. The real shift is in how accountability is understood. It no longer rests solely with “who clicked approve,” but extends to the individuals responsible for configuring, supervising, and validating the system that produced the outcome. In other words, ownership moves upstream, away from single decisions and into the design and control of the process itself.
That only holds up if it is properly documented. Firms need clear answers to a set of fundamental questions: “Who approved the automation use case, what decisions are eligible for automation, what thresholds or confidence levels apply, when and how human review is triggered, and how decisions are logged and retained.” Without that level of clarity, automation risks creating gaps rather than efficiencies.
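The questions Lubansky lists map naturally onto documented configuration. As an illustrative sketch only (the names, thresholds, and retention period below are hypothetical, not drawn from any vendor's product or regulatory figure), an automation policy can be captured as structured data so that approvals, decision eligibility, and review triggers are explicit rather than assumed:

```python
from dataclasses import dataclass

@dataclass
class AutomationPolicy:
    """Documented answers to the core governance questions."""
    use_case: str                # what is being automated
    approved_by: str             # who approved the use case
    eligible_decisions: list     # which decision types may be automated
    confidence_threshold: float  # below this, route to human review
    review_trigger: str          # when and how human review is invoked
    retention_days: int          # how long decision logs are retained

def route(policy: AutomationPolicy, decision_type: str, confidence: float) -> str:
    """Automate only when the decision is eligible and confident enough."""
    if decision_type not in policy.eligible_decisions:
        return "human_review"
    if confidence < policy.confidence_threshold:
        return "human_review"
    return "auto"

policy = AutomationPolicy(
    use_case="KYC document classification",
    approved_by="Head of Compliance",
    eligible_decisions=["doc_classification"],
    confidence_threshold=0.95,
    review_trigger="confidence below threshold or ineligible decision type",
    retention_days=365 * 7,
)

print(route(policy, "doc_classification", 0.99))  # auto
print(route(policy, "sanctions_match", 0.99))     # human_review
```

The point of the sketch is that every branch into or out of automation traces back to a named approver and a recorded rule, which is the level of clarity Lubansky describes.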
As Lubansky puts it, the key to defensibility is “treating automation as part of a structured supervisory workflow” where roles, escalation paths, and audit trails are deliberately engineered rather than assumed. The firms that get this right are not relying on automation to simplify accountability; they are designing their governance so that accountability remains clear under scrutiny.
Accountability, as Supradeep Appikonda, COO and Co-Founder, 4CRisk.ai makes clear, does not move just because the process does. It “resides with the organization and cannot be outsourced to the automated process or AI.” That principle sounds obvious, but in practice it is where many firms lose discipline. Automation can create the impression that decisions are being handled, when in reality the responsibility to understand and stand behind those outcomes remains firmly with people.
To counter that, firms are placing human in the loop reviews at the right step in the process, and reinforcing them with audits, KPIs, and analytics that surface weaknesses early. Tools like RACI matrices are becoming more common, not as a formality, but as a way to ensure there is no ambiguity around who is responsible for what. The goal is not just oversight, but structured accountability that holds up under pressure.
The risk, as Appikonda points out, is that professionals can be “lulled into a false sense of security with automation,” trusting machine output with only light review. That is where problems begin. A small flaw in logic can be amplified across millions of transactions in seconds, especially as customer behaviour shifts or new edge cases emerge. In those moments, automation moves into more subjective territory, where intent matters and rules alone are not enough. As he puts it, this is where “humans need to be involved,” because even if guidelines are not technically breached, the impact at scale can be significant.
Where responsibility sits
Automation may be “a trusted ally in regulatory decision making,” as Rich Kent puts it, but the moment a regulator challenges an outcome, the lines sharpen quickly. “Responsibility does not sit with the software.” Regulators are not interested in the mechanics of the tool in isolation; they want to understand how the firm governs it. As he puts it plainly, “regulators do not supervise algorithms, they supervise institutions.”
That has real implications for where accountability sits. When decisions are challenged, accountability rests with compliance and senior management, and the presence of automation does not dilute that responsibility; it raises expectations.
Firms need to explain how the system works, which rules it applies, who approved them, and how performance is monitored over time. In well governed organisations, that responsibility is structured across layers, from policy teams interpreting regulation, to technology teams implementing logic, through to compliance leaders who remain ultimately accountable, with audit and risk functions providing assurance.
What regulators are really testing comes down to three things: “transparency, oversight, and control.” Firms must show how a decision was reached, prove that humans are actively monitoring outputs, and demonstrate they can step in, adjust, or override when needed.
The underlying message is hard to ignore. Automation may deliver decisions at speed and scale, but stewardship does not move with it. As Kent makes clear, every automated determination needs to be defensible, not just technically, but in a way that stands up under direct regulatory scrutiny, because when the questions come, it will not be the algorithm answering them.
From a regulatory standpoint, as Mike Lubansky puts it, “nothing has changed.” Responsibility still sits with the firm, the designated supervisory principal, and the documented supervisory system. But where regulators are focusing has evolved. They are looking beyond the outcome and into the governance behind it, asking not just what the system did, but whether automation has been embedded within a defensible supervisory structure.
In practice, that means responsibility is spread across three connected layers. There is “supervisory ownership,” the principal accountable for the compliance function. Then “governance ownership,” the group that approved and oversees the use of automation. And finally “operational monitoring,” the team responsible for ongoing testing, documentation, and escalation. Each layer plays a distinct role, but the system only holds together if those roles are clearly defined and connected.
The real risk is not automation itself, it is ambiguity. As Lubansky makes clear, when those layers are not mapped into a defined workflow, gaps appear quickly. “Automation without documented supervisory architecture creates exposure.” By contrast, when it is embedded within structured review and audit processes, it does the opposite, strengthening defensibility and giving firms a clear answer when regulators come calling.
Appikonda was succinct on this topic: “Responsibility sits with professionals, who need to be able to explain to a regulator why a specific decision was made. That means the logic in the algorithm needs to be clear and understood by those using the automation.”
Sufficient human oversight
How much human oversight is considered sufficient in automated compliance workflows?
On this point, Lubansky believes that there is no regulatory formula for “sufficient” oversight.
He explained, “Human involvement alone is not enough. Oversight is judged by whether it is risk-based, active, and documented. In many cases, targeted, risk-calibrated sampling with strong documentation is more defensible than blanket human review with weak traceability.”
He stressed that oversight is sufficient when a firm can reconstruct: why a decision was made, what logic or criteria were applied, who approved the framework, and what controls were in place at the time.
Meanwhile, Appikonda emphasised that human in the loop reviews will need to be optimized over time and adapted when buyer behaviours change.
He said, “Professionals need to actively interrogate results, which is possible when working with a co-pilot. Co-pilots invite human questioning rather than just requesting an approval. When the decision process appears faulty, professionals should have a clear line of escalation to a person with more expertise to weigh in on the result. If decisions are regularly being escalated, it’s time to rework the automation and accompanying workflow to streamline the logic.”
When Kent considers how much human oversight is enough, his answer is: not all of it, but not none either.
He remarked, “Regulators are not expecting humans to manually reprocess every automated decision. That would defeat the purpose of automation. Instead, they expect firms to demonstrate proportionate and risk-based oversight. In practice, this usually means maintaining a clear “human in the loop” at key points: reviewing high-risk or low-confidence cases, approving changes to rules or models, and monitoring system performance over time.”
Sampling and quality assurance reviews are common. Kent gave an example, in that a percentage of automatically cleared low-risk cases may be reviewed periodically to confirm that outcomes remain accurate.
“Exception handling processes are also critical, ensuring that unusual or complex scenarios are escalated to experienced reviewers,” said Kent. “Oversight also extends beyond individual decisions. Firms are expected to monitor trends, validate automated logic against regulatory changes, and maintain audit trails that explain how outcomes were reached. The goal is not to second-guess the system at every turn, but to demonstrate that it operates within a governed framework.”
Ultimately for Kent, sufficient oversight is about confidence and defensibility. “If a regulator were to ask, “How do you know this automated process is working correctly?”, firms should be able to answer clearly and calmly. Automation can enhance compliance — but human stewardship remains the safeguard that keeps it on course.”
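Kent's sampling example, periodically re-reviewing a slice of automatically cleared low-risk cases, amounts to simple random sampling at a fixed rate. A minimal sketch for illustration only (the 5% rate is hypothetical, not a regulatory figure; a seeded random generator makes the sample reproducible for audit purposes):

```python
import random

def sample_for_qa(cleared_cases, rate=0.05, seed=None):
    """Select a fixed fraction of auto-cleared cases for human QA review.

    Using a seed makes the selection reproducible, so an auditor can
    verify which cases were drawn for re-review.
    """
    rng = random.Random(seed)
    if not cleared_cases:
        return []
    k = max(1, round(len(cleared_cases) * rate))  # review at least one case
    return rng.sample(cleared_cases, k)

cases = [f"case-{i:04d}" for i in range(1000)]
qa_batch = sample_for_qa(cases, rate=0.05, seed=2024)
print(len(qa_batch))  # 50
```

In practice firms often stratify the sample by risk tier rather than drawing uniformly, but the governance principle is the same: the review rate is a documented choice, not an afterthought.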
Governance frameworks: keeping pace?
On the question of whether governance frameworks are keeping pace with regulatory automation, Lubansky stressed that governance frameworks are evolving, but many were built for static, rules-based systems, not adaptive ones.
“Firms that treat AI as “just another software tool” often underestimate the supervisory lift required,” he said. “Forward-looking firms are responding by treating automation governance as a continuous process, not a one-time approval. They are formalizing automation approval committees, documenting decision eligibility criteria, requiring auditable logs for all automated actions, and embedding exception handling within structured workflows.”
Appikonda, on the other hand, noted that governance frameworks, regulations, and standards mature as regulators and other industry experts clarify the risks associated with AI, and with the adoption of AI agents that may mask risks through over-automation.
He said, “Still, it’s the organizations themselves that must conduct due diligence to ensure the level of automation is suitable. It’s important to keep pace with vendor updates and features that while providing greater flexibility, may introduce more risk if guidelines are not nailed down.”
Kent stated that automation and technology, specifically AI technology, are reshaping compliance systems at an unprecedented pace. “Document review, classification checks, anomaly detection, and reporting workflows are increasingly supported — and in some cases driven — by technology.”
He underlined how efficiency is up, consistency is rising, and operational pressure is easing. “But as automation accelerates, a pressing question remains: are governance frameworks keeping pace? In many firms, the answer is ‘we’re getting there.’”
He continued, “Historically, governance in tax due diligence focused on policy interpretation, manual review controls, and quality assurance sampling. Automation changes the shape of that oversight. Instead of supervising individual reviewers, firms must now supervise systems — including the logic, rules, and in some cases AI models that sit behind automated workflows.”
To manage this, leading institutions are responding by bolstering cross-functional governance. Compliance teams are retaining ownership of regulatory interpretation. Technology teams manage implementation and system performance. Risk and audit functions test automated controls just as rigorously as manual ones. Documentation has become more important than ever — mapping regulatory rules to system logic and maintaining clear audit trails for automated decisions.
Kent remarked, “However, maturity levels vary. In some organisations, automation has moved faster than governance design, creating temporary gaps in oversight clarity. Regulators are increasingly attentive to this, not to discourage innovation, but to ensure that responsibility remains visible and well defined.”
The encouraging news, as Kent outlines is that governance is evolving. “Many firms are adopting structured model oversight, formal change management for automated rules, and periodic validation of system outputs. Automation may be transforming how compliance work is executed, but governance — when thoughtfully adapted — ensures it remains accountable and defensible.”
Kent concluded, “In particular, authorities are waking up to the opportunities that AI technology brings to increase the level of automation. As technology moves at an ever-increasing pace, governance must keep moving with it.”
Vital importance
As Areg Nzsdejan, CEO of Cardamon, notes, the ownership question around compliance decisions remains “one of the most important open questions in AI adoption.” For now, the position is relatively clear: AI vendors do not take liability for decisions, firms remain accountable, and named individuals still carry responsibility. If a regulator challenges an automated outcome, “the firm, not the AI provider, answers.” That baseline has not shifted, even as automation becomes more embedded in decision making.
Where it becomes more interesting is the direction of travel. Nzsdejan suggests this model may evolve, with AI native providers potentially taking on “limited liability for specific categories of decision making,” supported by insurance backed structures and contractual risk sharing. In that scenario, accountability could begin to look more distributed, closer to models seen in professional services, such as law firms standing behind advice.
At Cardamon, AI is framed as “digital teammates” that do the heavy lifting, make the first assessment, and structure the analysis, but do not replace the manager’s responsibility. The manager remains accountable. That leads to the real tension in the system: “how much review is required before trust becomes justified.” Trust may increase as systems mature, but as Nzsdejan makes clear, accountability will remain anchored to humans unless and until liability itself is contractually redefined.
A major opportunity
According to Kelvin Dickenson, CPO of StarCompliance, for employee compliance teams, AI presents a major opportunity.
He said, “As regulations grow more complex and data becomes harder to manage, AI can help identify risk patterns, monitor activity, and streamline enforcement. Its purpose is not to replace human expertise but to build upon it, strengthening decision-making and unlocking new possibilities through collaboration between people and technology.”
Dickenson explained that at StarCompliance, the firm strongly believes the future of compliance lies in combining intelligent technology with thoughtful governance and experienced professionals.
He said, “Star is deeply committed to learning how the industry is using AI today and exploring which aspects will drive its future adoption. That’s why we conducted the 2025 AI & Compliance Market Study, which found that over 60% of firms expect to adopt advanced AI tools by 2030. This projection is echoed by a 2024 Deloitte report, which found that nearly 70% of financial services leaders expect AI to play a central role in transforming compliance operations within the next three to five years, confirming the industry’s accelerating shift toward intelligent compliance solutions.
“These insights highlight the momentum building around AI in compliance. It reinforces our focus on supporting firms as they navigate this evolving landscape with clarity, confidence, and innovation.”
Ownership stays human
As Aiprise puts it, the position is simple: “automation can support compliance decisions, but it cannot own them.” Ownership still sits with a named human. AI systems, rules engines, and agents may act as “very fast analysts,” but they are ultimately there to propose, summarise, and route, not to carry responsibility. That distinction remains critical when regulators come knocking.
In practice, that is why strong programmes start with “policy first, automation second.” Firms define AML, KYC, and KYB rules in plain language, then translate them into system logic, so the technology is clearly implementing policy rather than quietly redefining it.
Every automated outcome must also be explainable. If an alert is cleared, there needs to be a clear “because,” showing which lists were checked, what matched, what score was produced, and which rule fired. If that cannot be surfaced quickly, it is a governance gap, not a technical detail.
Oversight, meanwhile, is not binary. It is risk based. Lower risk cases can be fully automated with sampling, while higher risk or more complex cases require human escalation, review, and documented rationale. When regulators challenge a decision, firms must be able to show the policy being applied, the data and checks used, the reasoning path from input to outcome, and where human oversight applied or was intentionally not required.
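The Aiprise requirement, that every cleared alert carries an explicit “because,” can be made concrete as a decision record logged alongside the outcome. The sketch below is illustrative only; the field names and watchlist examples are hypothetical, not any particular platform's schema:

```python
import json
from datetime import datetime, timezone

def build_decision_record(entity_id, lists_checked, matches, score,
                          rule_fired, outcome, human_reviewed):
    """Assemble the 'because' behind an automated alert decision."""
    return {
        "entity_id": entity_id,
        "lists_checked": lists_checked,    # which lists were screened
        "matches": matches,                # what matched, if anything
        "score": score,                    # the score produced
        "rule_fired": rule_fired,          # which rule drove the outcome
        "outcome": outcome,                # e.g. "cleared" or "escalated"
        "human_reviewed": human_reviewed,  # where oversight applied, or not
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_decision_record(
    entity_id="cust-0042",
    lists_checked=["sanctions list A", "watchlist B"],
    matches=[],
    score=0.12,
    rule_fired="low_risk_auto_clear",
    outcome="cleared",
    human_reviewed=False,
)
print(json.dumps(record, indent=2))
```

If a record like this can be surfaced quickly for any decision, the firm can answer the regulator's challenge directly; if it cannot, that absence is, as the article notes, a governance gap rather than a technical detail.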
The underlying issue is that many governance frameworks have not kept pace. Teams are often automating faster than they are updating controls built for manual review. The organisations ahead of the curve are treating this as a shift in the control environment, not just a software upgrade, and training senior leaders to interrogate explainability, override capability, and failure modes more rigorously. The conclusion remains consistent: “humans still own compliance decisions,” even if automation carries most of the operational load.
Copyright © 2026 RegTech Analyst