How purpose-built AI is transforming compliance from detective controls into preventive controls

As AI rewrites the playbook for financial crime and conduct risk, Behavox founder and CEO Erkin Adylov explains how purpose-built models are shifting compliance from fragmented, detective controls to unified, preventive ones.

Q1. Behavox is best known for communications surveillance. Why are you evolving into an end-to-end controls platform, and why now?

A: Because our customers pushed us there. Over the past three years, we’ve proven that AI can transform communications surveillance: four to five times more true positives, far fewer false positives, and better outcomes with regulators. Once that became the baseline, customers started asking a very simple question: “If AI can fix comms surveillance, why stop there?”

They don’t want one system and one story for comms, a different one for trades, another for archiving, and a separate stack for policies. They want fewer systems, simpler controls, and a single narrative they can explain to regulators, boards, and investors.

So evolving into an end-to-end controls platform is not a distraction from surveillance; it’s the logical next step. We’re taking the same AI stack, the same data layer, and the same governance discipline and applying them to trade surveillance, record-keeping, and policy management. The benefit to customers is a stronger overall control framework and fewer fragmented tools to defend in exams, investigations, or enforcement discussions.

Q2. Many institutions still struggle with fragmented systems and data silos. How does unified data and unified logic change what’s possible for the control framework?

A: You can’t build serious, defensible controls on top of fragmented data and inconsistent rules.

“In most institutions, comms surveillance, trade surveillance, archiving, and policy management all sit in different systems.”

The same risk—MNPI misuse, market abuse, conflicts of interest—is defined separately in each tool. When something happens, teams stitch together chats, emails, voice, trades, and records by hand. It’s slow, expensive, and error-prone.

One large global bank had exactly this problem: a patchwork of legacy systems that worked in individual regions but never scaled globally. Regional variations, incompatible data models, and poor integration meant it could not get a single, consistent view of risk or controls.

With a unified controls platform, you define a risk once and apply it consistently across surveillance, archive, and policy.

Behavox does this through AI Risk Policies—machine-readable policies that encode regulatory obligations and internal standards directly into the system. When a scenario fires, the platform can automatically pull in the relevant communications, trades, records, and policy references into a single case. In a hedge fund context, that might mean seeing a PM’s wall crossings, research interactions, trade blotter, and messages with the street in one place.
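The interview doesn’t show what an AI Risk Policy looks like internally, but the idea of defining a risk once and assembling all related evidence into a single case can be sketched in code. The snippet below is purely illustrative — every field name, identifier, and function is hypothetical, not Behavox’s actual schema:

```python
# Hypothetical sketch of a machine-readable risk policy and the
# "single case" assembly it drives. All names are invented for
# illustration, not taken from any real product schema.
MNPI_POLICY = {
    "policy_id": "RP-001",
    "risk": "MNPI misuse",
    "obligations": ["MAR Art. 14", "internal wall-crossing standard"],
    "evidence_sources": ["communications", "trades", "wall_crossings", "news"],
    "escalation": "compliance_l2",
}

def build_case(policy: dict, feeds: dict) -> dict:
    """Assemble one case by pulling every evidence source the policy
    names from the corresponding data feed (a callable stub here)."""
    return {
        "policy_id": policy["policy_id"],
        "risk": policy["risk"],
        "evidence": {src: feeds[src]() for src in policy["evidence_sources"]},
        "escalate_to": policy["escalation"],
    }

# Stub feeds standing in for real surveillance data stores.
feeds = {
    "communications": lambda: ["chat-123"],
    "trades": lambda: ["trade-456"],
    "wall_crossings": lambda: ["wc-789"],
    "news": lambda: [],
}
case = build_case(MNPI_POLICY, feeds)
```

The point of the sketch is the shape, not the details: the risk is defined once, and everything downstream — evidence gathering, escalation — is driven from that single definition rather than redefined per tool.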

Once you have that integrated view, you can see which policies are effective, where gaps exist, and what needs to change. Unified data and logic turn the control framework from a patchwork into something coherent, explainable, and much easier to defend.

Q3. Where do you see AI having the biggest impact in trade surveillance, and how does Polaris illustrate that?

A: In trade surveillance, AI has two very concrete jobs: context and speed of deployment.

The first is context. A single trade alert on its own never tells the whole story. With Polaris, we use agentic AI workflows to pull in contextual data automatically: related chats and emails, voice, control room communications, news, and corporate events. The AI reviews that bundle and helps decide whether something looks like a genuine issue or an obvious false positive.

That’s exactly what human surveillance teams do today, but they do it manually and slowly. Very often, closing an alert is as simple as matching a trade to a wall-crossing record or a news timestamp. AI can do that work at scale, and even close obviously benign alerts on its own, so humans focus on QA and judgement, not chasing basic context.
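The "matching a trade to a wall-crossing record" check described above is simple enough to sketch directly. This is a minimal illustration of the idea, not Polaris code — the time window and field shapes are assumptions:

```python
from datetime import datetime, timedelta

def is_explainable(trade_time: datetime,
                   wall_crossings: list[datetime],
                   window: timedelta = timedelta(days=2)) -> bool:
    """Illustrative only: treat a trade alert as benign/explainable if
    an approved wall-crossing record exists close enough in time.
    The 2-day window is an invented parameter, not a real default."""
    return any(abs(trade_time - wc) <= window for wc in wall_crossings)

# A trade the day after a wall crossing is explainable; one with no
# nearby record is not, and would stay open for human review.
recent = is_explainable(datetime(2024, 1, 10), [datetime(2024, 1, 9)])
stale = is_explainable(datetime(2024, 1, 10), [datetime(2024, 1, 1)])
```

In a real system this would be one of many contextual checks an agentic workflow runs before deciding whether a human needs to look at the alert at all.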

The second is speed of deployment: onboarding and integration. Today, getting a trade surveillance system live can take months or even years, largely because of painstaking field mapping—aligning each firm’s data model to the vendor’s schema. It’s labour-intensive, repetitive work.

We believe AI should do that. With Polaris, AI helps infer and map different data types and sources from the client environment into the schema needed to generate alerts. We already see that AI can perform this mapping work quickly and accurately, while humans review and confirm. That shifts the model from “armies of people building integrations” to “AI does the heavy lifting, experts check and approve”, which is faster, cheaper, and much easier to scale across desks, entities, and regions.
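The "AI does the heavy lifting, experts check and approve" model for field mapping can be sketched as a simple review gate: the model proposes mappings with confidence scores, high-confidence proposals are auto-applied, and the rest are queued for a human. Everything here is hypothetical — in practice the proposals would come from a model, and the threshold would be tuned, not hard-coded:

```python
# Hypothetical "AI proposes, human approves" field-mapping sketch.
# Proposals map a client's source column to a canonical schema field,
# with an invented model-confidence score attached to each.
proposals = {
    "TradeDt": ("trade_date", 0.97),
    "Qty": ("quantity", 0.95),
    "CptyCd": ("counterparty_id", 0.62),  # ambiguous: needs a human
}

def split_by_confidence(props: dict, threshold: float = 0.9):
    """Auto-apply mappings above the threshold; route the rest to review."""
    auto = {src: tgt for src, (tgt, conf) in props.items() if conf >= threshold}
    review = {src: tgt for src, (tgt, conf) in props.items() if conf < threshold}
    return auto, review

auto_applied, needs_review = split_by_confidence(proposals)
```

The design choice being illustrated is the division of labour: the repetitive bulk of the mapping is automated, while human experts spend their time only on the genuinely ambiguous cases.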

Q4. Once AI has improved surveillance and reduced noise, why do you see preventive controls as the natural next step?

A: The first step was to make detective controls actually effective.

When you move from lexicon-based systems to purpose-built AI, you get four to five times more true positives and dramatically less noise. You start to see patterns clearly: which desks, PMs, products, channels, and behaviours generate real risk. A UK-based hedge fund CCO put it very directly to us:

“Regulators like that we use AI and Behavox, but our investors love it — because it shows we’re serious about identifying risk and safeguarding their capital.”

That’s the value of investing in high-quality AI detection: regulators see stronger controls, and investors see a firm that takes risk seriously.

Once you have that level of visibility, the question changes from “Can we spot problems?” to “Why are we only reacting?” Preventive controls are the next layer. You use what AI is seeing in surveillance and trading to inform how policies are written, how attestations are structured, how the first line is supervised, and what gets escalated. Policies stop being static documents; they become part of a closed loop between obligations, behaviour, and outcomes.

“You still need strong detective controls and evidence, but the emphasis shifts from catching issues late to designing the environment so they are less likely to happen in the first place.”

Q5. Many vendors are partnering with big LLM providers and “adding Copilot” to their systems. Why did Behavox choose to build its own LLMs, and what does “purpose-built AI for controls” actually mean?

A: Because in our world, the AI isn’t a convenience feature—it is the control.

Most “Copilot-style” integrations focus on nice-to-have capabilities: summarising, searching, drafting. Those can be useful, but they’re not what regulators care about. What matters is the AI that actually drives detection, prioritisation, escalation, and case outcomes. That has to be engineered like any other critical control.

We built our own LLMs and AI stack for three reasons.

First, governance and stability. If you rely purely on a generic LLM API, you don’t control the training data, the update cycle, or the underlying behaviour. That’s a problem when model risk, internal audit, or regulators ask for documentation, reproducibility, and change control. With our own models, and with AI Risk Policies on top, we can show exactly how the model is configured, what it is optimised for, and how it has changed over time.

Second, fitness for purpose. Our models are trained and tuned on conduct risk, market abuse, regulatory language, and control workflows—not on general internet text. They’re built to spot specific patterns in trades and communications, link those patterns to policies, and generate evidence you can stand behind in an investigation or exam.

Third, track record. We’ve invested over $200m in R&D and have had AI in production for three years across more than 100 institutions, including a central bank and a regulator as customers. In that time, our models and surrounding processes have gone through internal audit, model validation, monitors, and regulators. That is exactly the kind of scrutiny you want if the AI is part of your control framework.

“There are places where generic LLMs can add value at the edge, but for the core of the controls stack we believe you need purpose-built, owned, and governable AI, not a black box you rent by the token.”

Q6. What are the main risks of relying on generic LLM tools for compliance, and how should firms think about trusting AI when it is part of their control environment?

A: The main risk is mistaking a general-purpose assistant for a governed control.

Generic LLMs are extraordinary tools, but they aren’t optimised for your risks, your regulations, or your control framework. If you ask a generic model to “help with compliance”, you might get clever answers, but when a regulator or model risk committee asks why it produced a particular output, you may not have a defensible explanation.

Governance is another issue. You often have limited visibility into the model’s training data or update cycle. That doesn’t align with the documentation, stability, and change-control expectations placed on critical controls. We’ve even seen generic LLMs give different answers to the same conduct scenario when the wording changes slightly. That’s unacceptable if your name is on the CCO attestation.

In terms of trust, I think firms should look at three things: evidence, engineering, and transparency.

• Evidence: Is the AI already in production in environments like yours? Has it survived internal audit, model validation, and regulatory scrutiny?

• Engineering: Is the stack built specifically for controls, with clear configuration, logging, and AI Risk Policies that tie behaviour back to obligations and risks?

• Transparency: Can you document how it works, challenge its behaviour, and explain it to a regulator?

That’s how we’ve built Behavox: three years of production AI, a stack designed specifically for controls, and a way of working with customers that makes the AI feel like a well-understood part of the control framework, not a black box.

Q7. Looking ahead, how will AI reshape compliance over the next few years, and what role do you see Behavox playing?

A: I see three big shifts.

First, full-population coverage. AI makes it realistic to monitor all relevant employees, channels, and languages. We already have customers using Behavox to monitor communications in 15 languages across more than 70,000 employees, and many are moving toward full coverage because it’s now affordable and effective.

Second, integrated controls. Instead of isolated tools for surveillance, archiving, and policy management, firms will move to coherent platforms where data, logic, and evidence are shared. That makes it easier to respond to exams, defend decisions, and explain the control framework to boards, regulators, and investors.

Third, prevention. As AI-driven detection gets better, the real value shifts to using those insights to design better policies, better first-line controls, and better training. The line between “monitoring” and “prevention” will blur.

Behavox’s role is to be the partner that makes that transition safe and credible: the most compliant AI stack in the industry, the most integrated controls platform, and a roadmap driven by what our customers need to solve next, not by marketing trends. Ultimately, we want firms to have effective, efficient, defensible controls so they can focus on running the business and generating returns.

 


Copyright © 2018 RegTech Analyst
