Agentic AI regulation in APAC: what banks must do

Agentic AI has quickly become the new headline topic in financial services, following last year’s surge of interest in generative AI and predictive models.

The promise is straightforward: autonomous agents that can plan tasks, call tools, retrieve data, and complete multi-step workflows with less human input. The problem is also straightforward: firms are adopting the technology faster than regulators are issuing rules for it, particularly across Asia-Pacific (APAC).

SymphonyAI, which offers AI-powered FinCrime prevention solutions, recently examined how financial institutions can navigate agentic AI regulation in APAC.

Despite the constant discussion, there is currently very little explicit private-sector regulation aimed specifically at agentic AI across major APAC markets. In countries including Australia, Singapore, New Zealand, and Malaysia, supervisors have not produced dedicated agentic AI guidance for financial institutions. A similar picture can be seen in other jurisdictions such as South Korea and Indonesia.

Instead, regulators are leaning on broad, principles-led expectations, effectively telling firms to manage agentic AI by extending the governance and risk frameworks they already use for models, operational risk, and accountability.

SymphonyAI focused on Australia, New Zealand, Singapore, and Malaysia. Across these markets, the themes are consistent even if the documents differ: human-led accountability must remain in place, model risk management needs to cover AI and any autonomous components, and explainability and transparency should be available for auditors, regulators, customers, and the wider public.

Those principles are visible in national approaches that shape the wider regulatory mood.

Australia’s Guidance for AI Adoption brings together responsible AI practices but does not address agentic systems directly. Singapore’s Model AI Governance Framework, supplemented by a Generative AI update, has helped set expectations for how AI should be governed, including in financial services. New Zealand’s Algorithm Charter and AI Strategy 2025 focus on human accountability and strong data stewardship. Malaysia’s AI Governance and Ethics Guidelines set a national foundation while leaving sectoral regulators to translate it into operational direction.

None of these frameworks names agentic AI outright, largely because policy cycles move slowly and many policymaking bodies have only recently published guidance on earlier waves of AI, SymphonyAI stated.

One of the clearest signals of where agentic governance may go next is at the state level in Australia. New South Wales has issued specific public-sector guidance for agentic AI and created an Office for Artificial Intelligence to operationalise responsible adoption.

While it applies to government agencies and is not mandatory, private-sector risk teams are already reviewing it as a practical blueprint for risk assessment, guardrails, transparency, and accountability. A key feature is the idea that each agent should have a named accountable owner, supported by IT and system owners where relevant, to keep responsibility clear rather than dispersed.

Singapore also shows that “regulation” is only part of the story. Even without agentic-specific rules, the country is becoming a proving ground for governed deployment. Microsoft’s Agentic AI Accelerator, launched with Digital Industry Singapore, is supporting the development of agentic applications under structured conditions. Bank of Singapore, part of OCBC Group, is already using agentic AI in KYC processes, with an assistant drafting Source of Wealth reports and cutting cycle times from days to hours, while emphasising controls, documented oversight, and accountability. Meanwhile, the Monetary Authority of Singapore (MAS) continues to expand AI and cyber risk expectations in ways that are relevant to agentic systems, particularly when agents chain multiple tools and data sources.

SymphonyAI is positioning its Sensa Risk Intelligence (SRI) platform around this shift, describing it as an AI-native compliance platform for end-to-end business process automation. The firm says SRI uses agentic AI to help organisations deploy agents that automate tasks across operations, enhance detection, and drive efficiency gains while supporting compliance controls.

For more insights, read the full story here.

Copyright © 2026 FinTech Global
