Artificial intelligence is reshaping how organisations operate, and with that transformation comes an entirely new category of workplace interaction: AI communications, or “aiComms”.
According to Theta Lake, as firms accelerate their deployment of tools such as Microsoft Copilot, Zoom AI Companion, Anthropic’s Claude, and OpenAI’s ChatGPT, the communications produced by these systems are rapidly becoming one of the most complex and underappreciated governance challenges in enterprise security.
Theta Lake recently set out exactly what AI communications are, and why they represent the last mile of AI security.
AiComms refers to the layer of interaction created when employees engage with AI tools and large language models (LLMs). These are not passive or inconsequential exchanges. AI is drafting client emails, surfacing sensitive data in response to prompts, and summarising confidential meetings. Each of these outputs may carry compliance, ethical, and regulatory weight — yet most organisations are only beginning to understand how to manage them.
Research spanning 500 financial services firms highlights just how acute the problem has become. While nearly all (99%) are now deploying AI tools, 88% report significant challenges around AI governance and data security. The volume and complexity of communications being generated is growing at a pace that existing governance frameworks were never designed to handle. This gap has created what security practitioners are calling the “last mile” of AI security — the point at which human decision-making and machine-driven intelligence converge, and where most oversight strategies are currently falling short.
The risks involved span three broad domains. On the security and privacy side, 45% of firms say they struggle to detect whether confidential or sensitive data has been exposed through generative AI outputs. Personal information, credit card details, or confidential client data can surface in prompts or responses without triggering traditional data loss prevention tools. On compliance, 47% of organisations say they find it difficult to ensure AI-generated content meets regulatory requirements. Both FINRA’s 2026 Annual Regulatory Oversight Report and the UK’s Financial Conduct Authority have made clear that existing regulatory frameworks apply equally to AI-generated communications, placing the burden of capture, supervision, and archiving squarely on the firms deploying these tools. Behaviourally, 41% of organisations are already identifying new and risky user patterns, from jailbreaking — attempts to circumvent AI guardrails — to more subtle forms of prompt steering, where employees may seek to access information beyond their authorisation level through iterative queries.
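As a rough illustration of the sensitive-data-exposure point, the sketch below shows how a firm might inspect both sides of an AI exchange for obvious patterns such as payment card numbers or contact details. The function name, patterns, and output format are hypothetical and far simpler than any production DLP or supervision tool; they are only meant to make the idea of inspecting prompts and responses concrete.

```python
import re

# Hypothetical, minimal patterns for illustration only; real DLP and supervision
# tooling covers far more identifier types, formats, and contextual signals.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_ai_exchange(prompt: str, response: str) -> list[dict]:
    """Flag sensitive-looking data on either side of an AI exchange."""
    findings = []
    for direction, text in (("prompt", prompt), ("response", response)):
        for label, pattern in SENSITIVE_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append({
                    "direction": direction,  # data typed in vs. surfaced by the AI
                    "type": label,
                    "excerpt": match.group(0),
                })
    return findings

# Example: confidential data surfacing in a model response rather than a prompt
hits = scan_ai_exchange(
    prompt="Summarise the client onboarding notes",
    response="Card on file: 4111 1111 1111 1111, contact jane.doe@example.com",
)
print(hits)
```

The point of the sketch is simply that inspection has to happen at the conversational layer, where the exposure actually occurs, rather than only at the network or file level.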
Responsibility for managing these risks does not fall neatly on any single team. IT security functions tend to focus on network vulnerabilities and data protection but often lack the contextual visibility needed to detect nuanced user behaviours. Compliance teams, meanwhile, may be well-equipped to supervise regulated interactions but are typically ill-positioned to catch unethical AI use, such as an employee prompting a system to access confidential files without leaving an audit trail or probing for colleagues’ compensation data through a series of seemingly innocent queries.
This is the crux of the last-mile problem. Traditional security frameworks protect systems, networks, and data at an infrastructure level. But the last mile — where humans and AI interact in natural language — is where intent, context, and compliance actually converge. Guardrails alone are insufficient. Even well-configured AI systems can inadvertently expose personally identifiable information (PII), material non-public information (MNPI), or sensitive internal documents through ordinary user behaviour.
What organisations need is behavioural visibility: the ability to observe how users and AI systems interact, identify anomalies, and understand the full context across multiple tools and records. This means capturing AI interactions in their complete conversational context rather than as isolated prompt-and-response pairs, detecting patterns such as jailbreaking or unethical prompt steering, inspecting content for sensitive data exposure or potential misconduct, and being able to reconstruct conversations — including across related threads of chat, audio, and email — to ensure accuracy, completeness, and traceability.
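To make the idea of behavioural visibility a little more concrete, here is a minimal sketch of how interactions might be captured with their conversational context, reconstructed in order, and checked for repeated probing within a thread. The class, keywords, and threshold are illustrative assumptions, not a description of Theta Lake's product or of any specific vendor API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteraction:
    """One prompt/response pair, kept with its conversational context."""
    conversation_id: str          # ties related prompts, chats, and threads together
    user: str
    prompt: str
    response: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConversationLedger:
    """Illustrative capture-and-review layer; keywords and thresholds are assumptions."""

    PROBING_KEYWORDS = ("salary", "compensation", "ignore previous instructions")
    PROBING_THRESHOLD = 3  # repeated hits within one thread raise a flag

    def __init__(self) -> None:
        self._threads: dict[str, list[AIInteraction]] = {}

    def capture(self, interaction: AIInteraction) -> None:
        # Archive the full exchange, not an isolated prompt-and-response pair.
        self._threads.setdefault(interaction.conversation_id, []).append(interaction)

    def reconstruct(self, conversation_id: str) -> list[AIInteraction]:
        # Return the complete thread in order, for supervision or audit.
        return sorted(self._threads.get(conversation_id, []), key=lambda i: i.timestamp)

    def flag_probing(self, conversation_id: str) -> bool:
        # Count prompts in a thread that touch on restricted or guardrail-evading topics.
        hits = sum(
            any(k in i.prompt.lower() for k in self.PROBING_KEYWORDS)
            for i in self._threads.get(conversation_id, [])
        )
        return hits >= self.PROBING_THRESHOLD
```

Even a toy structure like this shows why context matters: a single prompt about compensation looks innocuous, while three of them in one thread is exactly the kind of pattern that only becomes visible when the whole conversation is retained together.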
Blocking access to AI tools is not a credible answer. Overblocking pushes employees towards unmonitored shadow IT environments and stifles the innovation these tools are meant to enable. Instead, organisations must invest in comprehensive oversight combining content inspection, behavioural analytics, and contextual supervision. Those that do will be best placed to unlock the full productivity value of AI — safely, responsibly, and with the confidence that they can demonstrate effective oversight to regulators when called upon to do so.
Securing the last mile of AI communications is not just a technical challenge. It is fast becoming a defining test of how seriously organisations take the governance of the AI systems they are deploying at scale.
Read the full Theta Lake post here.
Copyright © 2026 RegTech Analyst