Why AI governance is now a top security priority

AI has moved decisively from experimentation into core business infrastructure, and security leaders are now grappling with the consequences.

Findings from Theta Lake's 2025–2026 Cyber 60 CISO Survey underline how deeply AI is embedded in modern organisations: 46% of respondents say AI is already critical to both business operations and security strategy.

At the same time, 75% report experiencing or suspecting an AI-related security incident in the past year, signalling that AI risk has become operational rather than hypothetical.

Recent high-profile incidents illustrate how quickly artificial intelligence can expose organisations to legal, regulatory, and reputational harm. Employees at Samsung unintentionally leaked confidential source code by pasting it into ChatGPT. Air Canada was held legally responsible after its chatbot provided incorrect fare information.

An artificial intelligence coding assistant reportedly deleted a production database and attempted to conceal the error, while New York City’s MyCity chatbot issued guidance that could encourage unlawful behaviour. In another case, iTutorGroup settled claims that its AI-driven recruitment software discriminated against older applicants. Collectively, these examples highlight a growing reality: organisations are accountable for the outputs and behaviour of their AI systems, regardless of whether errors originate from humans or machines.

As generative artificial intelligence increasingly shapes how information is created, summarised, and acted upon, new forms of exposure are emerging. Risks such as prompt injection, jailbreak behaviour, and model manipulation operate at a speed and scale that traditional manual or rules-based controls struggle to manage.

This has accelerated the need for supervisory and review artificial intelligence capable of monitoring AI-influenced communications in real time, particularly as AI becomes embedded across email, chat, meetings, and document workflows.

One of the clearest findings from the survey is that artificial intelligence now represents both a growing risk surface and an indispensable capability. Security leaders are being forced to reassess long-standing assumptions around visibility and control, often finding that legacy security models were not designed for environments where AI actively shapes meaning and decisions.

The report distinguishes between two fundamentally different AI risk surfaces. The first is external artificial intelligence threats, where attackers use AI to enhance phishing, fraud, impersonation, or malware development. These threats, while serious, largely extend familiar cybersecurity challenges and are typically addressed through established defences such as email security, endpoint protection, and identity controls.

The second, and significantly larger, risk surface lies in internal AI usage. Employees increasingly rely on sanctioned tools such as copilots and meeting assistants, alongside unsanctioned “shadow AI” tools. This creates exposure through sensitive data being entered into prompts, AI-generated outputs that are inaccurate or biased, autonomous actions taken without oversight, and a lack of auditability when regulators ask how decisions were influenced. In these cases, liability sits squarely with the organisation.

Internal artificial intelligence usage scales rapidly: every employee effectively becomes an AI operator, and every prompt and response becomes a potential compliance record. Blocking AI outright often drives usage underground, reducing visibility rather than risk. As a result, governance through enablement is emerging as the preferred approach, allowing approved AI tools to be used while maintaining oversight, capture, and contextual understanding.

Visibility has therefore become foundational. AI-generated content rarely remains confined to a single channel; a draft may be created by AI, refined in chat, discussed in a meeting, and finalised by email. Without capturing prompts, outputs, and how they evolve across conversations, organisations cannot explain how decisions were shaped or defend them when challenged.

This shift in thinking is reflected in CISO priorities. Rather than focusing solely on expanding artificial intelligence capabilities, 55% plan to evaluate AI model access governance tools, while 54% are considering secure inference platforms. Governance is increasingly defined by observability and accountability, not restriction.

Vendor strategy is also under greater scrutiny. According to the survey, 82% of respondents say a vendor’s AI approach is very or critically important. Buyers are assessing whether AI is trained responsibly, behaves predictably, and produces outputs that can be reviewed and explained within regulated workflows. Transparency, human-in-the-loop controls, and documented training practices are becoming decisive purchasing criteria.

Operational risks are also evolving. Prompt injection and jailbreak attempts were reported by 41% of organisations, while the same proportion flagged shadow AI usage. These challenges are less about user misconduct and more about gaps in oversight. When AI operates outside observable systems, organisations lose the context needed to understand intent and risk.

Finally, AI risk surface assessments are gaining momentum, with 94% of organisations having conducted or planning to conduct one. These assessments focus less on the models themselves and more on where artificial intelligence intersects with communication, decision-making, sensitive data, and regulatory obligations.

Taken together, the findings point to a clear shift. Organisations are moving beyond experimentation towards governing how AI shapes communication, judgement, and accountability. In this environment, artificial intelligence must be observable and reviewable within everyday communication systems. The ability to understand how AI participates in decision-making may ultimately determine whether it delivers lasting value or introduces unseen risk.


Copyright © 2026 RegTech Analyst
