By Agha Saeed Khan | BT&D Insights | March 2026
Something significant is happening in AI, and the pace of change is no longer giving organizations the luxury of a wait-and-see posture. Over the past eighteen months, AI capabilities have advanced from useful productivity aids to systems that can autonomously complete complex, expert-level tasks end to end. For boards, CEOs, and executive committees, this is no longer primarily a technology question. It is a strategic, governance, and risk question, and in most organizations, it is not yet being treated with the urgency it deserves.
The Pace of Change Is Not What Most Leaders Think It Is
The common mental model of AI progress is linear: each year a modest improvement on the last. That model is no longer accurate. Progress has been compounding, and it accelerated sharply through 2025 and into 2026. Independent research tracking AI task performance against human benchmarks shows that the length of autonomous tasks AI can handle has been doubling roughly every seven months. New models arriving now can reason through complex problems, manage multi-step workflows independently, and produce output that routinely matches or exceeds what experienced professionals deliver.
If your organization’s AI strategy was formed more than twelve months ago and has not been revisited, it was written for a different era. The question is no longer whether to engage with AI. It is how quickly you can build the governance, capability, and risk frameworks to do so responsibly and competitively.
What Leading Organizations Are Already Doing
Organizations that have moved beyond piloting into structured AI deployment share several characteristics. Boards in these organizations treat AI governance as a board-level matter, not an IT subcommittee issue. They have assigned clear accountability for what AI is in production, what data it uses, and what happens when it fails. Their AI strategies are built around two or three high-return use cases with measurable business outcomes, not dozens of disconnected pilots. And they have treated data governance as a prerequisite rather than an afterthought, because the quality of AI output is bounded by the quality of the data it runs on.
Critically, the organizations generating the most value have designed human oversight into their workflows from the start, proportionate to the risk of the decisions being supported. High-stakes decisions retain meaningful human review. Lower-risk, well-validated processes move toward automation. The key word is designed: effective oversight is not a sign-off step added at the end. It requires understanding where AI systems are likely to fail and building in the right check at the right point in the process.
The Risks That Are Not Being Managed Well Enough
For every organization handling AI governance well, several are not. The most common gaps are these. AI models in production behave differently across different inputs, drift as conditions change, and can fail in ways that are not immediately obvious. Most organizations outside financial services, and many banks with newer AI deployments, do not yet have model risk frameworks adequate to monitor and manage this. Third-party vendor agreements for AI tools frequently leave accountability for errors, data use, and regulatory compliance poorly defined. And AI deployment treated as a technology project rather than a change management initiative consistently underdelivers on adoption and benefit realization.
The regulatory environment is also evolving faster than most compliance functions are tracking. AI-specific requirements from financial regulators, data protection authorities, and sector supervisors are accumulating across jurisdictions. Organizations operating across borders need a mapped view of their AI deployments against these requirements, and most do not yet have one.
Banks: Specific Considerations
Banks face the AI transition with a specific combination of pressure and constraint. The competitive and cost case for deploying AI is strong, but the regulatory environment, model risk obligations, and data sensitivity create governance requirements that are more demanding than in most other sectors. The institutions moving fastest have concentrated on fraud and AML detection, credit risk modelling, customer service automation, and operational workflows in regulatory reporting and reconciliation. In trade finance and trade-based money laundering, a domain of particular relevance in several markets, AI is beginning to identify anomalies in documentation and transaction patterns that manual review consistently misses.
Even in the more sophisticated banks, governance gaps remain. Model risk frameworks designed for statistical models are not yet equipped to handle the additional complexity of large language models and generative AI. Board reporting on AI risk is frequently focused on project status rather than the risk profile of systems already in production. And the data privacy obligations associated with using customer data for AI development are still being worked through in most institutions. Banks in emerging markets that self-impose rigorous governance standards, rather than waiting for regulatory requirements, will be better positioned both competitively and in their relationships with regulators as requirements evolve.
What Leadership Should Do Now
- Conduct an honest inventory of where AI is already being used. Many organizations will surface more deployments than leadership is aware of.
- Assign clear accountabilities for AI governance: the risk register, vendor management, model monitoring, and regulatory compliance mapping.
- Review board reporting on AI. If the board is not receiving regular, substantive reporting on the risk profile of AI systems in production, that reporting needs to be built.
- Invest in executive and board AI literacy. Governance is only as good as the judgment of the people exercising it.
- Engage with regulators proactively rather than waiting for requirements to arrive.
The organizations that will navigate this transition well are not necessarily the ones that move fastest. They are the ones that move with intention: clear about where AI adds value, honest about the risks, and disciplined about the governance needed to manage those risks at scale. Engagement without governance is not a strategy. It is exposure.
At BT&D, we work with financial institutions and organizations in emerging markets on governance, risk, and digital transformation, helping leadership navigate the AI transition with both ambition and discipline.
Agha Saeed Khan is the CEO of Business Transformation and Development LLC-FZ (BT&D).
