As financial institutions turn to AI to automate compliance, a key question arises: do we truly understand these systems’ decisions? The black-box nature of many models challenges transparency and trust. Explainable AI could change that, offering clarity around how algorithms reach conclusions. If successful, it might be the missing link that makes AI in compliance truly accountable. Key industry thought leaders examined this question in the first part of a two-part series.
As AI systems become increasingly embedded in financial compliance operations, a critical tension has emerged between the sophistication of AI models and the fundamental regulatory requirements that govern their use, claims Oisin Boydell, chief data officer at Corlytics.
He said, “The question facing compliance professionals today is not whether AI can support regulatory obligations, but whether it can do so in a manner that satisfies the transparency and accountability standards that regulators demand.”
For Boydell, auditability, attestation, traceability and transparency form the cornerstone of effective regulatory compliance. These principles, he claims, enable firms to demonstrate adherence to regulatory frameworks and provide regulators with the assurance that compliance decisions are sound, defensible and properly documented.
“However, as AI increasingly supports and, in some cases, replaces human decision-making in compliance management, the spotlight has shifted to a more complex challenge: ensuring that AI-driven compliance decisions can meet these same standards of transparency and explainability,” Boydell commented.
Boydell also stressed that advanced AI models – particularly LLMs and deep learning systems – present a fundamental paradox.
He explained, “As these systems become more capable and sophisticated, their internal decision-making processes become increasingly opaque—even to the AI scientists and model developers who created them. These ‘black box’ models can deliver impressive performance, but understanding precisely how they arrive at specific conclusions remains a significant challenge.”
Such opacity, Boydell underlines, creates a critical issue for regulated industries. Financial institutions must document and justify AI-driven decisions to regulators, ensuring that processes are understandable and auditable.
“Yet the very characteristics that make advanced AI models powerful—their ability to identify complex, non-linear patterns across vast datasets—also make them difficult to interpret in ways that humans can readily understand and validate,” he said.
However, there remains a key explainability challenge. “Explainable AI techniques aim to bridge this gap by providing insights into how AI models reach their conclusions. In theory, XAI enables organizations to trace the logic behind each prediction, identify potential biases or errors, and build trust among stakeholders. However, the field of explainable AI remains an emerging research area, and the challenge is far from solved,” said Boydell.
A key area discussed around explainability has been the need to consider a human-in-the-loop approach. Relying solely on XAI techniques may not provide the level of transparency that regulators require or the level of trust that compliance professionals need when making critical decisions, stated Boydell.
“Understanding how an AI model functions internally may not even be informative or possible in many cases,” Boydell added. “The solution lies not in making AI completely transparent at the algorithmic level, but in integrating AI within human workflows that enable effective oversight.”
The Corlytics CDO remarked that a human-in-the-loop approach embeds human oversight within AI-driven processes, enabling compliance professionals to verify AI decisions by providing them with key information, relevant details, and the full context of compliance determinations.
“Rather than attempting to explain the internal mechanics of complex models, this approach focuses on giving compliance professionals the tools and information they need to validate AI outputs efficiently,” said Boydell.
He went on, “This partnership model harnesses the strengths of automation—speed, consistency, and the ability to process large volumes of data—while preserving the nuanced judgment and accountability that trained professionals provide. By using AI across the full regulatory lifecycle, while retaining human oversight for final decisions, organizations can build trust in AI-based regulatory compliance solutions through verified outcomes rather than algorithmic transparency alone.”
While AI models may function as black boxes, compliance operations cannot. Transparency and the ability for compliance professionals to understand, trust, and verify AI-generated decisions remain critical, particularly as regulators demand documentation and justification of automated processes.
Boydell said, “At Corlytics, we embed this human-in-the-loop approach across our AI-driven compliance solution, from horizon scanning and regulatory change management, through obligations and requirements analysis, to policies and controls management, and the mapping and connections between all these components. We combine AI decision making with human-focused workflows to leverage AI efficiencies whilst supporting this critical human oversight.”
Incorporating AI within human-in-the-loop workflows that enable oversight of key decisions offers a practical solution to the explainability challenge, detailed Boydell.
“This approach acknowledges the limitations of current XAI techniques while still meeting regulatory requirements for transparency, auditability, and accountability. As AI adoption in compliance continues to grow, building trust through verified, human-supervised processes is essential for managing the complex and highly fluid regulatory landscape and enabling a trusted, compliant future across regulated industries,” he said.
Changing the game
In the view of b-next, AI has changed the way compliance teams operate. It processes vast amounts of data, detects patterns of suspicious behavior, and highlights risks that would otherwise go unnoticed. But as automation becomes more common, so does the question of trust, stressed the firm.
“Can compliance teams, regulators, and clients truly understand and rely on the decisions an algorithm makes?” said b-next. “That is where XAI comes in. It promises to open the black box of machine learning and show how conclusions are reached. In an industry built on accountability and evidence, this kind of transparency is no longer optional; it is essential.”
b-next believes clarity in automated compliance is key. “Most AI systems are designed for performance, not explanation. They flag anomalies and assign risk scores but often fail to communicate why something was flagged. In compliance, that lack of clarity is a serious issue. Every alert has consequences. It can lead to an investigation, a trading restriction, or even a report to regulators.”
For the company, XAI changes this dynamic by showing which variables influenced a model’s decision. It connects patterns, data points, and logic into a narrative that humans can understand. Instead of simply trusting the system, compliance officers can see and verify its reasoning. This makes AI a partner, not a mystery, the firm claims.
b-next added, “When teams can interpret automated outcomes, they can act faster, explain findings internally, and stand behind their conclusions with confidence.”
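To make that concrete, here is a minimal sketch of the kind of per-alert attribution b-next describes: for a simple linear risk model, each input’s contribution to a specific alert can be surfaced directly to the reviewer. The feature names, synthetic data and model below are illustrative assumptions, not b-next’s system.

```python
# A minimal sketch of per-alert feature attribution for a linear risk model:
# each feature's contribution to a specific alert is coefficient * value,
# which can be shown to a compliance officer as "why this was flagged".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical surveillance features per transaction (illustrative names).
FEATURES = ["amount_zscore", "velocity_24h", "new_counterparty", "off_hours"]
X = rng.normal(size=(1000, len(FEATURES)))
# Synthetic labels: risk driven mostly by amount and velocity.
y = (0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.2 * rng.normal(size=1000) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_alert(x):
    """Return per-feature contributions to the model's log-odds for one case."""
    contributions = model.coef_[0] * x
    return sorted(zip(FEATURES, contributions), key=lambda kv: -abs(kv[1]))

alert = X[0]
print(f"risk score: {model.predict_proba([alert])[0, 1]:.2f}")
for name, contrib in explain_alert(alert):
    print(f"{name:>18}: {contrib:+.2f}")
```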
According to the firm, every compliance professional knows that regulatory audits can be demanding. When auditors or regulators review surveillance systems, they are not only interested in what was detected but how it was detected. They want to ensure that logic, data, and governance are sound.
“Explainable AI can simplify that process,” stressed b-next. “When systems generate human-readable explanations, firms can demonstrate the inner workings of their algorithms without needing complex technical interpretations. This cuts down on audit time, reduces miscommunication, and increases regulator confidence in the technology being used. Essentially, explainable AI allows compliance to be both more efficient and more defensible.”
For b-next, one of the biggest challenges in explainability is balancing openness with the need to protect proprietary models. “Firms want to show how their systems reach conclusions, but they do not want to expose the algorithms themselves.
“Layered explainability provides a solution,” the firm continued. “It allows organizations to share understandable summaries of model logic, such as which factors had the greatest influence on a decision, without revealing the technical details of the model’s design. This achieves transparency without giving away trade secrets, ensuring compliance teams and regulators have what they need while innovation remains protected.”
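A rough illustration of that layering, under the same kind of illustrative assumptions: the internal record keeps exact attributions, while the shared summary exposes only ranked factors in qualitative terms, with no weights, thresholds or architecture.

```python
# A minimal sketch of layered disclosure: the "internal" layer holds exact
# attribution values, the "shared" layer exposes only ranked, qualitative
# statements suitable for an auditor or regulator.
def layered_summary(attributions, top_k=3):
    """attributions: list of (factor_name, signed_contribution) pairs."""
    ranked = sorted(attributions, key=lambda kv: -abs(kv[1]))[:top_k]
    total = sum(abs(v) for _, v in attributions) or 1.0
    shared = []
    for name, value in ranked:
        share = abs(value) / total
        strength = "major" if share > 0.4 else "moderate" if share > 0.15 else "minor"
        direction = "increased" if value > 0 else "decreased"
        shared.append(f"{name} had a {strength} influence and {direction} the risk score")
    return {"internal": attributions, "shared": shared}

summary = layered_summary([("amount_zscore", 1.8), ("velocity_24h", 0.9), ("off_hours", -0.2)])
for line in summary["shared"]:
    print(line)
```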
In summary, b-next believes that XAI is not a passing trend.
The firm explained, “It represents a necessary evolution in how compliance technology operates. The ability to interpret and justify automated decisions will soon become a regulatory expectation, not an advantage. More importantly, it is a step toward rebuilding trust in the relationship between humans and machines. Compliance officers can move from asking ‘Why did the system do this?’ to confidently saying ‘Here’s why the system made this decision.’
“In an environment where accountability is everything, XAI might just be the missing link that connects automation with understanding, efficiency with transparency, and technology with human judgment,” the company concluded.
The critical bridge
In the view of RegTech firm Vivox.ai, in a financial ecosystem increasingly reliant on automation, XAI has emerged as the critical bridge between innovation and accountability.
A Vivox spokesperson said, “As regulators sharpen their focus on how AI decisions affect customers and compliance outcomes, the ability to show how a model reaches its conclusions is no longer a nice-to-have—it’s a regulatory necessity.”
The company gave the example of the EU AI Act, which makes this explicit. It explained, “Under the new regime, financial institutions deploying high-risk AI systems—such as those used in AML or KYB checks—must ensure their models are transparent, traceable, and auditable. The emphasis is on human oversight and explainability, ensuring that decisions impacting access to financial services can be reviewed and justified.”
In a similar vein, the UK’s FCA has been advancing its stance on AI assurance, focusing on model risk management, fairness and governance.
“Its guidance underscores that assurance must be ‘proportionate, evidence-based, and explainable’—a principle that resonates strongly with the compliance community,” said Vivox.
Meanwhile, for FinTechs, this shift isn’t theoretical. Vivox pointed to a real-world example: a European FinTech unicorn that recently adopted Vivox’s AI KYB agent to automate the onboarding of corporate clients.
“As part of its governance process, the company implemented a tapered review framework for model output validation—an approach aligned with regulatory expectations for human-in-the-loop assurance,” said Vivox.
Explaining the process, Vivox said that in the early phase, 100% of AI-generated KYB assessments were manually reviewed during the first four weeks. By weeks 4-6, after incorporating customisations to reflect the FinTech’s internal policies, reviewers again validated nearly all outputs against the baseline.
“Confidence grew as results consistently matched human reviewers’ expectations. Over subsequent weeks, the sample size of manual checks decreased—to 70–80% by Week 8 and 50–60% by Week 12—transitioning toward a sampling-based degradation-monitoring model,” said Vivox.
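As a rough sketch of how such a taper might be wired up, the schedule below steps the manual-review rate down over time but reverts to full review if sampled human/AI agreement degrades. The rates and agreement floor are assumptions drawn from the figures quoted, not Vivox’s implementation.

```python
# A minimal sketch of a tapered review schedule with degradation monitoring.
import random

def review_rate(week):
    """Planned share of AI KYB assessments routed to manual review."""
    if week <= 6:
        return 1.0          # weeks 1-6: review everything
    if week <= 8:
        return 0.75         # weeks 7-8: roughly 70-80%
    if week <= 12:
        return 0.55         # weeks 9-12: roughly 50-60%
    return 0.10             # steady state: sampling for degradation monitoring

def route_case(week, agreement_so_far, floor=0.98):
    """Route a case to manual review per the schedule, or always if agreement drops."""
    rate = 1.0 if agreement_so_far < floor else review_rate(week)
    return random.random() < rate

# Example: at week 10 with healthy agreement, roughly half of cases are sampled.
random.seed(1)
sampled = sum(route_case(week=10, agreement_so_far=0.995) for _ in range(1000))
print(f"manually reviewed: {sampled / 1000:.0%}")
```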
However, the company noted that something unexpected happened: the rollout accelerated.
It explained, “Thanks to the agent’s high accuracy and transparency, the fintech compressed what was planned as a 12-week phased launch into just four weeks from technical discovery to production. Explainability didn’t slow adoption—it enabled it, by giving risk teams confidence that model decisions could be understood, audited, and defended.”
XAI also plays a crucial role in meeting the GDPR’s ‘right to explanation’ obligations, said Vivox.
“When automated systems make decisions about customers, institutions must be able to articulate the logic behind those decisions. For compliance functions, this capability simplifies internal investigations and reduces audit burdens—regulators can see exactly how a decision was reached without demanding opaque technical justifications,” the firm explained.
Vivox concluded by highlighting that XAI provides a pragmatic balance between compliance transparency and proprietary model protection. “Modern explainability techniques—such as decision trace visualisation and confidence attribution—allow firms to disclose reasoning without revealing sensitive intellectual property,” said Vivox.
The company finished, “As the regulatory perimeter expands, explainability will likely become a differentiator, not a constraint. Firms that can both comply and clarify—showing regulators, auditors, and customers that their AI works as intended—will move faster and build greater trust. In that sense, explainable AI isn’t just the missing link in compliance. It’s the foundation of the next era of responsible financial automation.”
The power of explainability
According to Baran Ozkan, co-founder and CEO of Flagright, explainability turns an automated decision from a black box into an auditable story.
He remarked, “When a model can show which signals mattered, how they combined, and what alternatives would have changed the outcome, regulators and customers can see that the result was reasoned, not arbitrary. That transparency supports core duties under modern privacy laws, including the need to inform people about automated decisions, offer a meaningful way to contest them, and prove human oversight where required.”
Ozkan added that it also shortens audits. “If every alert carries a reason code, feature attributions, the data lineage behind those features, and a clear control that was triggered, examiners spend less time chasing spreadsheets and more time validating outcomes,” he said.
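A minimal sketch of what such an alert record could look like as a data structure follows; the field names and values are hypothetical, not Flagright’s schema.

```python
# A minimal sketch of a per-alert audit record carrying a reason code,
# feature attributions, data lineage, and the control that fired.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AlertExplanation:
    alert_id: str
    reason_code: str                  # e.g. "VEL-003: unusual 24h velocity"
    feature_attributions: dict        # feature -> signed contribution
    data_lineage: dict                # feature -> source system / table
    triggered_control: str            # the policy or rule that fired
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AlertExplanation(
    alert_id="ALERT-2025-00042",
    reason_code="VEL-003",
    feature_attributions={"velocity_24h": 1.4, "amount_zscore": 0.6},
    data_lineage={"velocity_24h": "payments_db.tx_rollup_24h"},
    triggered_control="AML-Policy-7.2: rapid movement of funds",
)
print(json.dumps(asdict(record), indent=2))
```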
However, Ozkan noted that the hard part is balancing openness with protection of proprietary models, stating that the practical approach is layered disclosure.
“Firms keep weights and architecture private, while exposing regulator‑grade artifacts such as reason codes, surrogate explanations that are faithful within a defined window, counterfactual examples, and signed decision logs. That gives supervisors what they need to test fairness and consistency without forcing full model handover. At Flagright we design for explainability by default,” he said.
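One of those artifacts, the signed decision log, can be illustrated with a short sketch: entries are signed with a key the firm holds, so a reviewer can verify they have not been altered without ever seeing model weights. The format and key handling below are assumptions, not Flagright’s implementation.

```python
# A minimal sketch of signed decision-log entries using an HMAC over the
# serialised entry; tampering with any field invalidates the signature.
import hmac, hashlib, json

SIGNING_KEY = b"example-key-held-in-an-hsm"   # illustrative only

def sign_decision(entry: dict) -> dict:
    payload = json.dumps(entry, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {**entry, "signature": signature}

def verify_decision(signed: dict) -> bool:
    entry = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

log_entry = sign_decision({
    "alert_id": "ALERT-2025-00042",
    "decision": "escalate",
    "reason_code": "VEL-003",
})
print(verify_decision(log_entry))                             # True
print(verify_decision({**log_entry, "decision": "dismiss"}))  # False
```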
Detailing the Flagright offering, Ozkan mentioned that every score ships with human-readable rationales, immutable evidence, and a simulator that shows how different facts would have changed the decision.
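A toy version of that ‘what if’ simulator, built around a hypothetical scoring function rather than Flagright’s model, might look like this: vary one fact at a time and report whether the decision would have changed.

```python
# A minimal counterfactual sketch: perturb each fact and re-score the case.
def risk_score(case: dict) -> float:
    """Illustrative scoring function, not a production model."""
    score = 0.0
    score += 0.5 if case["amount"] > 10_000 else 0.0
    score += 0.3 if case["new_counterparty"] else 0.0
    score += 0.2 if case["high_risk_country"] else 0.0
    return score

def counterfactuals(case: dict, threshold: float = 0.6):
    base = risk_score(case) >= threshold
    results = []
    for key, value in case.items():
        alt = {**case, key: (not value) if isinstance(value, bool) else value / 2}
        flipped = (risk_score(alt) >= threshold) != base
        results.append((key, alt[key], "changes decision" if flipped else "no change"))
    return results

case = {"amount": 15_000, "new_counterparty": True, "high_risk_country": False}
for fact, alternative, outcome in counterfactuals(case):
    print(f"if {fact} were {alternative}: {outcome}")
```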
“The goal is simple: speed for operations, clarity for auditors, and recourse for customers,” he concluded.
Ryan Swann, CEO and founder of RiskSmart, succinctly outlined a key benefit of explainable AI.
“If you can clearly say ‘here’s why this decision happened,’ customers understand it, teams can challenge it, and audits go smoother. Keep it simple (a core value at RiskSmart). Plain‑English summary, clear reasons for the specific case, and a short pack of proof you can show a regulator. Be honest about limits and test that your explanations match what the system really did,” he remarked.
Copyright © 2025 RegTech Analyst