As AI becomes deeply embedded in compliance operations, one challenge continues to loom large: transparency. While machine-driven monitoring and decision-making promise speed and accuracy, many of these systems still operate as opaque black boxes — a problem for regulators and firms that must justify every outcome. The question now is whether greater transparency, powered by explainable AI, is the final hurdle standing between today’s automated tools and true, regulator-ready AI compliance.
For Areg Nzsdejan, CEO and founder of Cardamon, the conversation around AI in compliance often swings between two poles – efficiency and accountability.
“Everyone agrees automation can transform regulatory work, but few talk about what happens when an algorithm makes a decision that affects a client, a transaction, or even a regulator’s interpretation,” said Nzsdejan. “That’s where explainable AI comes in – and it may be the bridge between automation and genuine trust in compliance.”
For the Cardamon CEO, explainable AI brings visibility into how compliance decisions are made, not just what the outcome is. Instead of a simple black box that outputs ‘approved’ or ‘flagged’, such a technology enables compliance teams to trace the logic behind each result – the data that was used, the reasoning behind the final call, and the thresholds triggered.
He said, “This kind of transparency is something we’ve embedded across Cardamon’s regulatory intelligence engine. When the system maps an obligation or calculates residual risk, users can view the underlying rationale. It’s not just automation for speed’s sake; it’s automation you can stand behind, backed by traceable evidence and context.”
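To illustrate the idea, here is a minimal sketch of what such a traceable decision record could look like – the field names and structure are hypothetical, not Cardamon’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated compliance decision, with the evidence behind it (illustrative only)."""
    outcome: str                  # e.g. "approved" or "flagged"
    inputs_used: dict             # the data points the system actually consumed
    thresholds_triggered: list    # which rules or limits fired
    rationale: str                # human-readable explanation of the call
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: a transaction flagged because it breached a velocity threshold
record = DecisionRecord(
    outcome="flagged",
    inputs_used={"transaction_amount": 14_500, "transactions_last_24h": 9},
    thresholds_triggered=["velocity_limit_exceeded"],
    rationale="9 transactions in 24 hours exceeds the configured limit of 5 "
              "for this customer risk tier.",
)
print(record.outcome, "-", record.rationale)
```

The point of such a record is that the outcome never travels without the data and reasoning that produced it.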
Currently under GDPR, individuals have the right to understand how automated decisions affect them. Nzsdejan outlined that explainable AI turns that principle into practice. “It can provide human-readable explanations of why a particular outcome occurred, what factors influenced it most, and what could have changed the result,” he said.
Nzsdejan went on, “In compliance, this matters deeply. Regulators and internal teams alike need to know that automated systems aren’t making arbitrary calls. By surfacing the reasoning behind each output, explainable AI transforms the ‘right to explanation’ from a legal checkbox into a living, operational safeguard. At Cardamon, that’s reflected in how we design audit trails and decision logs – clear, interpretable, and regulator-ready.”
For the Cardamon CEO, explainability also reduces one of the biggest operational pains in compliance: the audit burden. “Instead of pulling scattered spreadsheets and recreating rationale months after the fact, explainable AI keeps a running record of what the model did and why,” he said.
In addition, he stressed that when a regulator asks why an alert was raised or an obligation was mapped, compliance teams can now provide a transparent, structured answer. This, he added, was not just about faster audits – it was about creating a compliance culture where every automated action is inherently defensible.
A key challenge that remains, despite this, is balancing transparency with intellectual property. Here, Nzsdejan detailed that firms need to show how their systems make decisions without exposing their algorithms or data pipelines.
“Explainable AI supports this through layered disclosure – offering meaningful logic and key factors while abstracting sensitive details. It’s the balance between openness and security, and it’s one every compliance technology provider needs to get right,” he stated.
Explainable AI isn’t just a future concept – it’s becoming, in the view of Nzsdejan, a practical necessity.
“It helps firms justify automated decisions, meet data protection obligations, and build credibility with regulators. At Cardamon, we think about it less as a feature and more as a principle: if an AI system can’t explain itself, it shouldn’t be making regulatory calls. The future of compliance will be defined not just by how fast automation moves, but by how clearly it can show its work,” he said.
A crucial aspect
According to South African RegTech firm RelyComply, with AI increasingly needed to keep pace with even base-level compliance shifts, many businesses have been investing in design and development expertise that can curb its poorer connotations: unethical practice, hallucinations and bias.
The firm added, “Now, XAI is becoming a crucial aspect in the regulatory field: transparency around how models are used, how they arrive at their generated outcomes, and how their capabilities are easily understood.”
This, the company claims, marks a stark contrast to the black box algorithms that were only explainable unto themselves. “That alone cannot appease heavy AI restrictions that are tightening as our understanding of its discriminatory or harmful biases grows. Hereby, technical knowledge from data scientists and RegTech partners is paramount to not just ensure AI’s real-time data processing and segmentation is implemented, but made highly explainable and traceable for audit purposes,” the business added.
XAI is not the only missing link in justifying the technology’s use in financial crime investigation and detection, the enterprise added, as it cannot work without the human element: accountability that a compliance function’s AML controls are carried out safely, securely and without risk of disclosing personal information. XAI, it went on, can only be trained to specific requirements through comprehensive model training at the hands of experts.
Inside a stricter regulatory framework, firms that build XAI into their automations from the very beginning can gain a competitive advantage.
Development timelines, the firm said, can be kept realistic by processing only the data intended for AML use. Mapping a model’s level of explainability against risk factors sets up XAI safely, and provides a working baseline to be improved over time through regimented testing.
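To make that baseline idea concrete, here is a minimal sketch, assuming hypothetical risk tiers and explainability levels rather than RelyComply’s methodology, of checking whether a model’s explainability meets the bar set by its risk mapping:

```python
# Illustrative mapping of AML model risk tiers to required explainability baselines.
# Tier names and rules are hypothetical; the point is a working baseline that can
# be tightened over time through regimented testing.
REQUIRED_EXPLAINABILITY = {
    "high_risk": "full",       # e.g. sanctions screening: per-decision rationale required
    "medium_risk": "partial",  # e.g. transaction monitoring: top contributing factors
    "low_risk": "summary",     # e.g. internal triage: aggregate model documentation
}

EXPLAINABILITY_RANK = {"summary": 1, "partial": 2, "full": 3}

def meets_baseline(model_risk: str, provided_explainability: str) -> bool:
    """Return True if the model's explainability meets or exceeds its risk tier's baseline."""
    required = REQUIRED_EXPLAINABILITY[model_risk]
    return EXPLAINABILITY_RANK[provided_explainability] >= EXPLAINABILITY_RANK[required]

print(meets_baseline("high_risk", "partial"))  # False: not yet safe to deploy
print(meets_baseline("medium_risk", "full"))   # True
```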
RelyComply concluded, “XAI is only a start to making our AML systems greater, and a way to protect the integrity of data usage that should benefit institutions, regulators and customers while AI’s technical capabilities for anti-fincrime grow.”
The core component of transparency
Explainability is a core component of transparency, which feeds directly into trustworthy AI. This is the view of Supradeep Appikonda, COO and co-founder of RegTech 4CRisk.ai.
Appikonda begins by emphasising how explainability strengthens day-to-day trust in AI-driven compliance work. As he puts it: “Explainability builds trust by showing the user the steps, sources and assumptions used to generate a response. Users can verify, collaborate with others on results, and revise accordingly. This is ‘Human in the Loop’ – where feedback is vital to ensure AI-generated results are reliable, accurate and build trust. Reviews from other team members can be accelerated, since SMEs can see structured evidence for why something was mapped, and defend it.”
He adds that this clarity doesn’t just help users — it reassures leadership too. “Explainability also increases stakeholder confidence; for example, compliance officers can defend and justify outputs to other stakeholders and regulators. For instance, in a compliance mapping scenario, explainable AI can show why a specific policy or control was mapped to a regulation, and how strongly it was matched — whether the requirement was fully met, partially supported, or only contextually related.”
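As a rough illustration of that graded mapping output, the labels and structure below are assumptions rather than 4CRisk.ai’s product, but they show how each match can carry a strength category alongside the evidence behind it:

```python
from dataclasses import dataclass

# Match strengths mirroring the categories Appikonda describes:
# fully met, partially supported, or only contextually related.
MATCH_LEVELS = ("fully_met", "partially_supported", "contextually_related")

@dataclass
class MappingResult:
    regulation_clause: str   # the requirement being addressed
    mapped_control: str      # the internal policy or control matched to it
    match_level: str         # one of MATCH_LEVELS
    evidence: str            # the text or reasoning supporting the match

result = MappingResult(
    regulation_clause="Customer due diligence must be refreshed periodically.",
    mapped_control="Policy 4.2: Annual KYC refresh for all active clients",
    match_level="partially_supported",
    evidence="Policy covers active clients only; dormant accounts are not "
             "addressed, so the requirement is not fully met.",
)
assert result.match_level in MATCH_LEVELS
print(result.match_level, "-", result.evidence)
```

Structured evidence of this kind is what lets an SME see why something was mapped and defend it in review.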
For him, the impact is as much about risk reduction as it is about efficiency. “Overall, explainability reduces human bias and error by grounding each match in transparent reasoning rather than opaque judgment. It also improves model governance because the same explanations can be logged, versioned, and audited later. Without explainability, AI deployments risk rejection and ultimately, failure.”
Appikonda points out that the regulatory stakes are even higher in Europe, where GDPR’s requirements make explainability non-negotiable. “GDPR Recital 71 and Article 22 grant individuals the right to request human intervention, a review of the decision, and an explanation, in plain language, of the rationale behind an AI outcome or decision that produces legal or similarly significant effects.
“This means explaining how an algorithm reached its decision — not by revealing AI model code, but by describing the main factors and rationale behind the outcome. Explainability is table stakes for companies that must be able to provide ‘meaningful information’ about the ‘logic involved’ and the significant factors and outcomes of the processing. That means context, sources, steps and assumptions, and more if required to clarify AI reasoning.”
This is why he frames explainable AI as central to compliance, not just a supporting feature. “Explainable AI helps meet GDPR’s right-to-explanation obligation by generating clear, interpretable explanations that regulators and affected individuals can understand and trust.”
Auditors, he notes, particularly benefit from this level of transparency. “Explainable AI helps auditors by being precise and specific on how outputs are derived and providing evidence and context to back up ratings and conclusions. For example, AI can show the policy and procedure that is violated with transactions and provide some details on the severity and consequences. The auditor, however, will always be needed to provide oversight and judgment since AI may not be able to see beyond a specific set of transactions. This kind of analysis by the auditor is particularly necessary when a finding is logged, and the business, third parties or regulators need to dive more deeply into potential consequences, such as fines or MOUs.”
Appikonda stresses that the technology used under the hood matters just as much as the explanations produced. “One fundamental risk to consider: RegTech tools leveraging private, secure, small language models that are trained on a current, accurate and specialized risk and compliance corpus will be more trusted and, by design, minimize both bias and hallucinations.
“Those that rely on large language models are less likely to be accurate, especially when mining for risks and vulnerabilities that are outdated or only arise in a specific context that is not covered by the LLM. In addition, human oversight and judgment are critical to catching subtle bias and will never be automated fully.”
Appikonda finished, “There are limits to explainable AI when it comes to protecting proprietary models and algorithms. The core principle is that transparency should be sufficient for the purpose – for example, a regulatory audit or a user appeal – but not so extensive that it compromises trade secrets. That means specific algorithms, how data is fine-tuned, or the training process are confidential trade secrets and, as such, should be disclosed only to regulators under protected circumstances.”
Critical enabler
As regulated industries accelerate AI adoption, explainability has emerged as a critical enabler of trust, transparency, and compliance, claims Chris Reed, head of product and technology at Wordwatch.
He said, “Explainable AI helps uncover how automated systems reach decisions, addressing mounting pressure from requirements such as GDPR’s ‘right to explanation’. Transparency is not optional; it’s key to ensuring compliance, mitigating model bias, and supporting regulatory audits.”
Reed added that equally important is understanding the end-to-end flow of data into and through AI systems. Without visibility into what data is captured, how it is processed and where it resides, he stated, organisations risk breaching data lineage, retention and access obligations. “Mapping data flows is the bedrock of defensible AI governance,” he said.
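As a rough sketch of what such a data-flow map might record, assuming hypothetical fields rather than Wordwatch’s implementation, each flow into an AI system can be logged with its source, processing, residency and retention, then checked against obligations:

```python
# Illustrative data-flow map for data entering an AI system. Field names are
# hypothetical; the point is that lineage, residency and retention are recorded
# for every flow so they can be checked against obligations.
data_flows = [
    {
        "source": "voice_recordings",
        "processing": "transcription, then redaction of card numbers",
        "storage_location": "eu-west on-prem cluster",
        "retention_days": 365 * 7,   # seven-year retention obligation
        "access_roles": ["compliance_analyst", "auditor"],
    },
    {
        "source": "chat_messages",
        "processing": "summarisation for supervision alerts",
        "storage_location": "eu-west on-prem cluster",
        "retention_days": 365 * 5,
        "access_roles": ["compliance_analyst"],
    },
]

# A simple governance check: flag any flow stored outside approved locations
APPROVED_LOCATIONS = {"eu-west on-prem cluster"}
for flow in data_flows:
    if flow["storage_location"] not in APPROVED_LOCATIONS:
        print(f"Residency risk: {flow['source']} stored in {flow['storage_location']}")
```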
Furthermore, the Wordwatch tech head stated that deploying small language models on-prem, with no requirement to interact with public-facing services, further strengthens regulatory posture.
He explained, “These models allow businesses to leverage AI while keeping sensitive communications and interaction data within their secure infrastructure, eliminating exposure to third-party clouds and reducing the likelihood of data leaks. On-prem AI models also ease regulator concerns over cross-border data transfers and uncontrolled inference risks.”
By combining XAI with robust data governance and secure architectures, Reed stated, organisations can confidently modernise their compliance frameworks, reducing audit friction and balancing transparency with operational efficiency.
A balancing act
For Baran Ozkan, CEO of Flagright, explainability turns an automated decision from a black box into an auditable story.
“When a model can show which signals mattered, how they combined, and what alternatives would have changed the outcome, regulators and customers can see that the result was reasoned, not arbitrary,” he remarked.
That transparency, Ozkan claims, supports core duties under modern privacy laws, including the need to inform people about automated decisions, offer a meaningful way to contest them, and prove human oversight where required. It also shortens audits. “If every alert carries a reason code, feature attributions, the data lineage behind those features, and a clear control that was triggered, examiners spend less time chasing spreadsheets and more time validating outcomes,” detailed Ozkan.
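A sketch of what an alert carrying those artifacts might look like, with field names that are purely illustrative rather than Flagright’s actual format:

```python
# Illustrative alert payload combining the artifacts Ozkan lists:
# a reason code, feature attributions, data lineage and the triggered control.
alert = {
    "alert_id": "ALERT-2025-00123",
    "reason_code": "STRUCTURING_PATTERN",
    "feature_attributions": {
        # Relative contribution of each signal to the score (sums to ~1.0)
        "cash_deposits_just_under_threshold": 0.55,
        "deposit_frequency_last_7d": 0.30,
        "new_beneficiary_count": 0.15,
    },
    "data_lineage": {
        "cash_deposits_just_under_threshold": "core_banking.transactions (T-1 batch)",
        "deposit_frequency_last_7d": "payments_stream.events (real time)",
    },
    "triggered_control": "AML-CTR-014: structuring detection",
    "decision_log_ref": "logs/2025/11/ALERT-2025-00123.json",
}

# An examiner can answer "why was this raised?" directly from the record
top_signal = max(alert["feature_attributions"], key=alert["feature_attributions"].get)
print(f"Raised under {alert['triggered_control']}; strongest signal: {top_signal}")
```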
However, for the Flagright head, the hardest part is balancing openness with the protection of proprietary models.
“The practical approach is layered disclosure. Firms keep weights and architecture private, while exposing regulator‑grade artifacts such as reason codes, surrogate explanations that are faithful within a defined window, counterfactual examples, and signed decision logs. That gives supervisors what they need to test fairness and consistency without forcing full model handover,” he stressed.
The Flagright founder finished by stating his company designs for explainability by default, with every score shipping with human‑readable rationales, immutable evidence, and a simulator that shows how different facts would have changed the decision.
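The counterfactual idea behind such a simulator can be sketched with a toy rule, purely illustrative and not Flagright’s model: re-run the decision with one fact changed and report whether the outcome flips.

```python
def score_transaction(tx: dict) -> str:
    """Toy scoring rule: flag if the amount and the country risk are both elevated."""
    risky = tx["amount"] > 10_000 and tx["country_risk"] == "high"
    return "flagged" if risky else "approved"

def counterfactual(tx: dict, field_name: str, new_value) -> str:
    """Show how the decision would change if one input were different."""
    original = score_transaction(tx)
    altered = score_transaction({**tx, field_name: new_value})
    if original == altered:
        return f"Changing {field_name} to {new_value!r} would not change the outcome ({original})."
    return f"Changing {field_name} to {new_value!r} would flip the outcome: {original} -> {altered}."

tx = {"amount": 15_000, "country_risk": "high"}
print(score_transaction(tx))                      # flagged
print(counterfactual(tx, "country_risk", "low"))  # outcome would flip to approved
```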
He concluded, “The goal is simple: speed for operations, clarity for auditors, and recourse for customers.”
Copyright © 2025 RegTech Analyst