AI adoption is accelerating across financial services and other regulated industries, but governance is struggling to keep up as firms scale usage faster than their controls.
Theta Lake's annual Digital Communications Governance Report found that 99% of firms plan to expand AI usage, while 88% are already experiencing AI governance and data security challenges, creating a growing tension between innovation and compliance.
Regulatory expectations for AI tools are becoming clearer, and they largely mirror the requirements already familiar to compliance teams overseeing digital communications. Firms are expected to conduct ongoing monitoring of prompts, responses, and outputs to confirm AI tools perform as expected and result in compliant behaviour, while also assessing the risks of using AI and updating policies, procedures, controls, and systems in line with applicable requirements.
Crucially, responsibility does not shift simply because a message was created by an AI system rather than a human. AI-generated communications are still communications, and firms remain accountable for what is said, shared, or produced—regardless of the author. In practice, this means that monitoring capability is increasingly central not only to day-to-day supervision, but also to investigation at scale when issues emerge.
The case for monitoring is not only about satisfying regulatory guidance, but about proving that AI tools continue to behave as expected in real environments. To detect problems early and demonstrate control, organisations need visibility into AI prompts and outputs, supported by review processes that can identify emerging risk patterns rather than relying on ad-hoc sampling.
Monitoring also becomes the foundation for improving AI behaviour over time. Regulators are signalling the importance of continuous evaluation and refinement, but firms cannot refine supervisory processes, adjust controls, or improve reliability and accuracy without seeing what systems are actually being asked—and what they are actually producing. Without that real-world visibility, organisations may miss performance issues until they become incidents.
Another pressure point is retention. AI interaction data may already be captured by enterprise observability and security tools, but often in ways that are not aligned to structured supervisory frameworks. That can create regulatory exposure, particularly if prompts and outputs are stored broadly without governance. Monitoring helps organisations determine not only what should be retained for oversight and auditability, but also what should not be retained, enabling more intentional control over data lifecycle and risk.
Traditional monitoring approaches struggle in this new environment. Keyword lists are not designed to supervise AI behaviour, and siloed review queues are poorly suited to detecting systemic drift across models, channels, and use cases. Fragmented capture makes accountability harder to evidence, and as AI increases communications volume, compliance teams do not need more alerts—they need meaningful signals that can be investigated quickly and defensibly.
Legacy surveillance was not built for high-velocity AI interactions, multimodal communications, cross-platform correlation, prompt-level inspection, or contextual replay. As a result, monitoring increasingly needs to be AI-native, unified, and context-aware to meet both operational reality and regulatory expectations.
Theta Lake positions its communications monitoring approach around that modern risk landscape. It combines full-fidelity capture, including AI prompts and outputs taken directly from system APIs, with AI-native risk detection across modalities and unified oversight spanning voice, chat, video, and AI-generated content. The company says its platform enables organisations to ingest, normalise, correlate, and enrich high-volume communications data while supporting observability, reconciliation, and forensic-level investigations. The aim is to help compliance teams detect real risk, reduce noise, improve AI behaviour over time, control retention intentionally, and maintain regulatory confidence.
It also points to ISO/IEC 42001 certification as a signal of commitment to responsible AI management, aligning its surveillance and governance capabilities with emerging expectations around transparency, accountability, and future-ready controls.
Copyright © 2026 RegTech Analyst