Why compliance needs humans in the AI loop


Compliance teams across the financial sector are facing an unprecedented challenge. With staff expected to review an avalanche of emails, instant messages, and chats each day, the volume has reached levels no human workforce can manage alone.

At the same time, regulators have not eased their expectations; according to Saifr, supervisory requirements are becoming more stringent than ever.

This has pushed firms to lean on artificial intelligence (AI) to help with surveillance. But a divide has emerged—some organisations have chosen to rely almost exclusively on AI, while others are avoiding it altogether due to regulatory uncertainty. Both extremes, however, risk exposing firms to compliance failures. The more effective solution lies in a balanced approach that blends AI’s data-processing power with human judgment.

The limitations of automation become clear when dealing with nuance. AI can flag thousands of potentially problematic keywords in moments, but it often lacks the ability to understand context. For example, references to “confidential estate planning” could be misinterpreted as illicit activity, despite being part of legitimate client conversations. FINRA Rule 3110 reinforces this, emphasising the need for supervisory systems that are “reasonably designed,” which means tools must align with both regulatory obligations and human interpretation.

Real-world examples highlight why this balance matters. An AI system may flag mentions of “gift cards” as possible policy violations. Yet, a human reviewer could see these discussions were about approved holiday bonuses for clients—fully compliant and documented. Without human oversight, firms risk both false positives and missed context.

Forward-looking institutions are implementing surveillance frameworks where AI takes the lead in scanning communications across channels, identifying patterns, and prioritising risks. These systems can flag anomalies in account opening procedures, cash transaction thresholds, or conversations involving high-risk jurisdictions. However, compliance officers remain the ultimate decision-makers, investigating alerts, applying firm-specific policies, and escalating cases when required.

The integration of anti-money laundering (AML) and know your customer (KYC) checks into surveillance makes this collaboration even more vital. AI can surface references to politically exposed persons (PEPs), beneficial ownership, or sanctions lists. But humans must determine whether flagged terms represent genuine risk or harmless context. Similarly, in marketing compliance—an area governed by FINRA Rule 2210—AI can quickly screen for missing disclosures, while humans ensure claims are fair, balanced, and aligned with firm activities.

Firms must also tackle the ethical risks of relying too heavily on AI. Systems trained on biased or incomplete data may lead to unfair outcomes or overlooked risks. To counter this, leading organisations are implementing clear documentation of AI decision-making, ongoing system testing, detailed audit trails, and mandatory human review of every AI alert. Transparency and accountability remain central to building regulatory trust.

A practical roadmap for success includes assigning governance ownership for AI oversight, training compliance teams, documenting audit trails, testing systems regularly, and maintaining escalation procedures. Importantly, AI should never replace human judgment and standards, but rather augment them.

The benefits of getting this balance right are clear. Firms can scale operations without inflating costs, reduce false positives, and reassure regulators that their surveillance programmes combine technological efficiency with human oversight. This frees compliance officers to focus on strategic work rather than repetitive manual checks.

As one compliance director summed it up: “AI doesn’t replace our judgment—it amplifies it.” Looking ahead, regulators and industry leaders agree that the most defensible surveillance models will be those that harness the best of both worlds—AI for scale and speed, and humans for judgment and accountability.

By maintaining a human-in-the-loop approach, financial institutions can achieve surveillance programmes that are both regulator-approved and operationally sustainable. This is not about choosing between AI or humans, but about creating a partnership that delivers compliance outcomes no single approach could achieve on its own.


Copyright © 2025 FinTech Global
