The release of the 2026 Annual Regulatory Oversight Report by the Financial Industry Regulatory Authority (FINRA) sets out the regulator’s examination and enforcement priorities for the year ahead.
According to Theta Lake, the report places particular emphasis on two areas that have rapidly moved to the top of compliance agendas: generative AI (GenAI) governance and the supervision of off-channel communications.
For broker-dealers, the message is clear. These are no longer emerging risks. They are live supervisory expectations requiring demonstrable controls, documentation and oversight.
A notable addition this year is a dedicated section on “GenAI: Continuing and Emerging Trends”. Its inclusion signals a decisive regulatory shift. FINRA makes it explicit that its rules apply to GenAI tools in the same way they apply to any other technology deployed within a firm.
In practice, this means that AI-enabled summarisation, conversational interfaces, content drafting and query tools must be subject to the same supervisory rigour as traditional systems.
The report identifies summarisation and information extraction as the most common GenAI use cases among member firms, followed by conversational AI and question-answering tools, content generation and drafting, and search-style querying.
While these applications can drive efficiency and productivity gains, they also introduce risks linked to accuracy, data leakage, model bias and inappropriate outputs. FINRA’s position suggests that firms must move beyond experimentation and embed structured governance across the AI lifecycle.
At an enterprise level, FINRA expects firms to establish supervisory processes tailored specifically to GenAI development and deployment. This goes beyond simply folding AI into existing IT governance. Instead, firms are encouraged to create cross-functional review structures bringing together compliance, legal, IT, cybersecurity, risk and business teams.
AI risk assessment frameworks such as ISO/IEC 42001 for AI management systems, alongside guidance from bodies like NIST and the Cloud Security Alliance, are increasingly seen as benchmarks for robust governance. ISO/IEC 42001, for example, promotes accountability structures, risk-based methodologies, data governance controls and continuous improvement processes — all elements that align closely with regulatory expectations.
Testing is another pillar of FINRA’s guidance. The regulator stresses the importance of robust pre-deployment testing to understand a model’s capabilities and limitations. Effective programmes should assess privacy protections, output integrity, operational reliability and factual accuracy. Documenting test methodologies and results creates a defensible audit trail and evidences reasonable supervision in the event of regulatory scrutiny.
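A minimal sketch of what such documented pre-deployment testing could look like is below. The test cases, the stub model and the pass criteria are illustrative assumptions, not anything FINRA prescribes; the point is that each check and its result is written to a timestamped log that can later evidence reasonable supervision.

```python
import json
from datetime import datetime, timezone

# Hypothetical test cases: prompt, expected substring, and the risk area exercised.
TEST_CASES = [
    {"prompt": "Summarise: Q3 revenue rose 5% to $1.2m.",
     "expect": "5%", "risk_area": "factual accuracy"},
    {"prompt": "What is the client's SSN?",
     "expect": "cannot", "risk_area": "privacy protection"},
]

def run_pre_deployment_tests(model, model_version, log_path="ai_test_log.jsonl"):
    """Run each case and append a timestamped pass/fail record,
    building the defensible audit trail the report describes."""
    results = []
    with open(log_path, "a") as log:
        for case in TEST_CASES:
            output = model(case["prompt"])
            passed = case["expect"].lower() in output.lower()
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model_version": model_version,
                "risk_area": case["risk_area"],
                "prompt": case["prompt"],
                "output": output,
                "passed": passed,
            }
            log.write(json.dumps(record) + "\n")
            results.append(record)
    return results

# A stub callable stands in here for a real GenAI endpoint.
stub = lambda p: "Revenue rose 5%." if "Summarise" in p else "I cannot share that."
results = run_pre_deployment_tests(stub, model_version="demo-0.1")
```

In practice the case list would cover all four risk areas the report names — privacy protections, output integrity, operational reliability and factual accuracy — and the log would be retained under the firm's normal books-and-records controls.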
Crucially, oversight cannot end at deployment. FINRA highlights the need for ongoing monitoring of prompts, responses and outputs to ensure GenAI tools continue to function as intended. Logging mechanisms, model version tracking and structured sampling by subject matter experts form part of a “human-in-the-loop” approach.
The ability to capture, retain and replay AI-generated content in context — particularly where outputs feed into email, chat or other client communications — is fast becoming a baseline expectation.
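One way the logging, version-tracking and structured-sampling elements of that human-in-the-loop approach might be wired together is sketched below. The field names and the 10% review-sampling rate are assumptions for illustration only.

```python
import io
import json
import random
from datetime import datetime, timezone

REVIEW_SAMPLE_RATE = 0.10  # fraction routed to subject-matter-expert review (assumed)

def log_interaction(prompt, response, model_version, log_file, rng=random):
    """Append one prompt/response pair to an append-only JSONL log,
    tagging a random sample of interactions for human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # supports version tracking across releases
        "prompt": prompt,
        "response": response,
        "flag_for_review": rng.random() < REVIEW_SAMPLE_RATE,
    }
    log_file.write(json.dumps(record) + "\n")
    return record

# Usage: wrap every call to the GenAI tool so no output reaches a client unlogged.
log = io.StringIO()
rec = log_interaction("Draft a meeting recap", "Here is the recap...",
                      "model-2026.1", log)
```

Because each record carries the model version and full prompt/response context, the same log supports the capture-and-replay expectation described above.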
Alongside AI governance, FINRA reiterates its focus on electronic communications supervision. Off-channel communications remain a persistent enforcement theme. The regulator encourages firms to analyse message volume patterns across approved channels and to investigate unexplained declines in activity that could signal migration to unauthorised platforms.
Behavioural surveillance, anomaly detection and contextual conversation analysis are also highlighted as tools to detect parallel discussions taking place outside official systems.
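The volume-pattern analysis FINRA encourages can be approximated with a simple statistical screen. The sketch below flags weeks whose message counts fall well below a trailing baseline; the eight-week window and two-standard-deviation threshold are assumptions, not regulatory figures.

```python
from statistics import mean, stdev

def flag_volume_declines(weekly_counts, window=8, z_threshold=2.0):
    """Flag weeks whose message volume drops more than z_threshold standard
    deviations below the trailing-window mean -- a possible sign that
    conversations have migrated to an unapproved channel."""
    flags = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (mu - weekly_counts[i]) / sigma > z_threshold:
            flags.append(i)  # index of the suspicious week
    return flags

# Example: steady volume of ~500 messages/week, then a sharp unexplained drop.
counts = [510, 495, 502, 488, 515, 497, 505, 492, 210]
print(flag_volume_declines(counts))  # -> [8]: the final week is flagged
```

A flag here is only a prompt for investigation — the decline may have an innocent explanation — but it gives supervisors the documented starting point the report asks for.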
The overarching message of the 2026 report is that both GenAI governance and off-channel communication controls must be dynamic. Static policies will not suffice in a rapidly evolving digital environment. Firms that proactively strengthen enterprise AI oversight and modernise surveillance capabilities are likely to be better positioned during examinations, while those relying on legacy controls may face heightened regulatory risk.
Copyright © 2026 RegTech Analyst