Five AI governance shifts financial firms must prepare for

AI has moved from a background productivity aid to an active presence in day-to-day work, and organisations are discovering that governance has not kept pace with the speed of adoption.

As firms roll out AI assistants, meeting notetakers and increasingly agentic tools, the challenge is no longer whether AI delivers value but whether its use can be controlled, supervised and explained, according to Theta Lake.

This tension is reflected in findings from Theta Lake’s 7th Annual Digital Communications Governance Report, based on responses from 500 IT and compliance leaders. Almost all financial services firms surveyed plan to expand their use of AI, yet a significant majority are already encountering governance and data-security issues. For many organisations, these pressures are becoming a real brake on adoption rather than a theoretical risk. Establishing AI governance early is therefore emerging as a prerequisite for scaling AI safely across both regulated and non-regulated industries.

One of the defining shifts for 2026 is the way AI will be treated inside organisations. Rather than being viewed as a passive feature embedded in software, AI is increasingly acting as a participant in workplace communications. As AI becomes native to unified communications and collaboration platforms, a new class of interactions is emerging, often referred to as aiComms. These interactions are not fleeting or informal. AI is drafting client emails, summarising regulated meetings and responding dynamically to prompts that evolve over time. Oversight based on isolated prompts or outputs is no longer sufficient, and firms are being pushed to capture, supervise and archive AI-generated communications in full context, applying controls at the moment content is created.

Alongside this shift, governance attention is widening beyond what AI produces to how people interact with it. Human-to-AI and even AI-to-AI behaviours are introducing new risk vectors. Employees may attempt to bypass controls using techniques such as jailbreaking, while AI systems themselves can inadvertently expose personal data, client information or market-sensitive material. Without visibility into prompts, outputs and interaction patterns, these risks can remain hidden. Effective governance in 2026 will depend on inspection capabilities that can identify unsafe behaviour, detect sensitive data exposure and flag the use of unsanctioned tools before issues escalate.

Trust in AI vendors is also changing. As AI adoption accelerates, organisations are becoming wary of broad assurances and marketing claims. Instead, demand is growing for independently verifiable governance standards.

Certification under frameworks such as ISO/IEC 42001 is gaining traction as a way for vendors to demonstrate mature AI management systems and responsible development practices. These standards align closely with emerging regulation, including the EU AI Act, and provide boards, clients and regulators with tangible evidence that AI is being deployed with appropriate controls.

Regulatory scrutiny is expected to intensify further in 2026. Supervisors have been clear that accountability does not change simply because AI is involved. Communications generated by AI are subject to the same expectations as those produced by humans.

Regulators in both the US and the UK have signalled that AI-enabled communications fall squarely within existing supervisory frameworks. As a result, firms will need to show they can capture and archive AI-generated content, supervise outputs with consistent rigour and maintain clear controls over how AI tools are introduced and used.

The final challenge shaping AI governance strategies is fragmentation. Most organisations already operate across multiple collaboration platforms, many of which now embed their own AI capabilities. Governing each tool in isolation creates blind spots, particularly as AI systems begin interacting across platforms. In 2026, effective governance will require a unified, cross-platform approach that applies consistent oversight to all AI-generated communications, regardless of where they originate.

Looking ahead, AI is reshaping not just how work gets done, but how risk must be managed. Organisations that lack visibility into AI interactions are leaving material gaps in their control frameworks. Those that invest now in purpose-built governance for aiComms will be better positioned to unlock AI’s benefits with confidence and meet the regulatory expectations that continue to evolve.

Copyright © 2026 RegTech Analyst
