Speaker: Sheldon Mills, FCA
Event: Supercharged Sandbox Showcase, FCA
Delivered: 28 January 2026
Key points
- Sheldon is leading a long-term review into AI and retail financial services, reporting to the FCA Board in the summer with recommendations to help the FCA continue to play a leading role in shaping AI-enabled financial services.
- AI is already shaping financial services, but its longer-term effects may be more far-reaching. This review will consider how emerging uses of AI could influence consumers, markets and firms, looking towards 2030 and beyond.
- This review does not change our regulatory approach. We remain outcomes-based and technology-neutral, ensuring greater flexibility for us and firms to adapt to technological change and market developments.
- We are asking for views on the opportunities and risks as AI becomes more capable, how AI could reshape competition and the customer relationship, and how existing regulatory frameworks may need to adapt. The deadline is 24 February.
Long-term review into AI
Before we begin, take a look around this room. This is the Supercharged Sandbox. 23 firms at the frontier of retail financial services, chosen from 132 applications. If anyone still doubts the pace of AI change in our sector, this room is the answer.
The Board has asked me to lead the long-term review into AI and retail financial services. I will report to the FCA Board in the summer, setting out recommendations to help the FCA continue to play a leading role in shaping AI-enabled financial services. This will culminate in an external publication to support informed debate.
Many of you know me from my work on competition and the Consumer Duty.
Those seven years taught me something simple but crucial: the real challenge in regulation isn’t dealing with what we already understand – it’s preparing for what we don’t.
And that’s exactly what this review is about. Designing for the unknown.
Outcomes-based and technology-neutral
Let me make one thing absolutely clear from the start. This review does not change our regulatory approach. We remain outcomes-based and technology-neutral. We are not unveiling new rules, nor are we prescribing how AI should be deployed today.
This approach gives us and firms greater flexibility to adapt to technological change and market developments, rather than setting out detailed and prescriptive rules. We believe that with a fast-moving technology like AI, this is the best way of supporting UK growth and competitiveness, while protecting consumers and market integrity.
What we are doing is looking ahead – deliberately, collaboratively and with open eyes – to understand how AI could reshape consumers’ lives, how markets might reorganise, and how regulation can stay effective in a world moving faster than any of us have known. And how we strike the right balance between risk and safety on the one hand, and growth and innovation on the other.
AI is pushing us into territory that nobody, anywhere, has fully mapped. No regulator has a complete picture. No firm does either. But we can do something far more important: we can design systems that adapt even when the path ahead isn’t fully visible.
Why we need to design for the unknown – and why now
AI has been used in financial services for a long time. Fraud models, trading systems, credit decisioning – nothing new. Even back in 2024, the Bank of England found that three quarters of firms were already using artificial intelligence. But the last two years have been different. Generative AI. Multimodal systems. Emerging AI agents.
Millions of people in the UK now use AI tools to interpret information, plan their lives and make decisions.
My favourite current use of models is to take a photo of food in my fridge and get some quick recipes for supper. But we also know from a few surveys that financial services consumers are using AI to plan their financial lives. Lloyd’s 2025 survey found that one in three customers use AI weekly to manage their money.
Many of you are already building tools – from personalising financial guidance, to reinventing customer journeys and better vulnerability identification.
So, we know firms will continue to invest in AI, and customers will increasingly use AI to access financial services. But we shouldn’t pretend we know how all of this plays out. We don’t yet know which models will scale. And we don’t know which risks will matter most – or which mitigations will actually work.
What we do know is that the UK has a choice: shape the future or inherit it. Designing for the unknown is how we choose leadership, not drift.
Exploring uncertainty through a plausible scenario
Let’s consider a shift in what AI is capable of and what consumers and firms expect from it. The development of a 'proxy economy' in which, over time, consumers may increasingly use AI as an intelligent intermediary between themselves and firms.
Assistive AI is here today. Tools that explain products, compare options, prefill forms and highlight risks. They support consumers without taking decisions away from them.
Advisory AI is emerging. Systems that nudge, recommend and encourage action – switching suppliers, reshaping budgets, refinancing at better rates. These tools promise better outcomes, but they also raise questions about transparency, neutrality and the basis of advice.
Autonomous AI is coming into view. Agents that act within the boundaries the consumer sets – shifting money, negotiating renewals, reallocating savings, or spotting risks before the consumer even sees them. For many households, this will be transformative. It reduces admin, improves decisions and cuts costs.
Let me make this concrete. Imagine Sarah, a working parent in 2030. Her AI agent manages household finances within agreed boundaries by moving money to higher-rate savings, flagging uncompetitive insurance renewals, even switching current accounts on her behalf.
For Sarah, this is transformative. She spends less time on admin, pays less for comparable products, and makes fewer costly mistakes.
But agent autonomy brings deeper questions.
- What happens when an AI agent makes a mistake?
- How do we ensure consumers understand enough to stay in control?
- And what happens if commercial incentives quietly shape the recommendations people see?
These are the questions we must ask before agent autonomy becomes normal – because once consumer behaviour shifts, it shifts fast. That is why this review matters.
Consumer outcomes matter
Many of you are already exploring how AI can support better outcomes with more accessible guidance, adaptive tools for those who struggle with financial confidence, and proactive identification of vulnerability. I’m excited by these opportunities.
We want to understand how firms can unlock these opportunities safely. And so designing for the unknown means looking squarely at the risks.
Consumers may delegate decisions they don’t understand. People with patchy data histories may face new exclusions. Scammers may exploit AI to mimic voices, create synthetic identities or manipulate communications at scale. Even a year ago, Experian found that over a third of UK businesses reported being targeted by AI-related fraud, and the capabilities of fraudsters will only continue to grow. Firms will have to combat this with technological advances of their own.
There are also risks that are less visible day-to-day, but just as important. AI can embed or amplify bias, leading to systematically worse outcomes for some groups. It can be hard to explain to a consumer – or to ourselves – why a particular decision was made, especially where models rely on complex data and proxies.
And when decisions are powered by ever more data, firms must get transparency and data protection right: using data lawfully, minimising it, securing it, and making sure customers understand what is happening and what choices they have.
And autonomous systems could make decisions that are technically logical but misaligned with a consumer’s real-world needs – because they are optimising for proxies rather than outcomes.
We want your insight on what you are seeing now – and what you suspect is coming next. The biggest failures are often born from what wasn’t anticipated.
Competition, market structure and new entrants
AI could change the drivers of market power in ways we need to understand early.
AI could be the great leveller. Perhaps giving a start-up the analytical power of a global bank. Or it could entrench the biggest players, the ones with the most data and the deepest pockets.
Big Tech firms may capture parts of the value chain without ever becoming regulated providers. Or consumers themselves, through their personal AI agents, may drive much more rapid switching, reshaping who holds power in ways we've not seen before.
These dynamics could make markets more open – or more concentrated. They could enhance competition – or reconfigure it entirely.
We're not taking a view. We do not know which future will dominate. We're asking you to help us see what's coming.
Because designing for the unknown means building flexibility now – while the system is still malleable – not when the structure is set in stone.
What does all this mean for regulation?
Our frameworks were built for a world where systems updated occasionally, models behaved predictably and responsibility was clearly located within the firm. AI challenges all three of those assumptions.
Models now update continuously. Harms can scale in hours, not months. And responsibility sits across developers, data providers, model hosts and regulated firms.
Accountability under the Senior Managers and Certification Regime (SM&CR) still matters – but what does 'reasonable steps' look like when the model you rely on updates weekly, incorporates components you don’t directly control, or behaves differently as soon as new data arrives?
What will the Critical Third-Party regime look like as AI firms continue to shape the landscape of financial services? And as firms continue to develop AI assurance platforms to monitor, audit, and evaluate AI systems, what should the role of the FCA be?
Our approach isn’t changing. We remain outcomes-based, technology-neutral and proportionate.
But how those principles apply in a world of fast-evolving systems is something we must explore now, not later.
We want to examine how AI will change the way we apply our rules and give you the clarity you need. Designing for the unknown means building a regulatory model that can evolve with the technology – without compromising clarity or trust.
And we won't do this alone. The FCA doesn't regulate AI as a whole, nor should we. I shall work with the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), and international counterparts to ensure a coherent environment for firms innovating in the UK.
What we need from you
This review will only be as strong as the evidence and insight we gather from those who are closest to the front lines of AI adoption – that is, from you.
We are asking for your views on the opportunities and risks you see as AI becomes more capable, how AI could reshape competition and the customer relationship, and how existing regulatory frameworks may need to adapt.
Let me close with this. The decisions we make in the next few years will shape retail financial services for a generation. The UK has built a sector that is trusted, innovative, and globally competitive. AI doesn't change that ambition, but it changes the landscape.
This review is about building a shared understanding, so that we can design for this future landscape together.
I'm asking you directly: contribute. Challenge our assumptions. Tell us what we're missing. The deadline is 24 February 2026. The answers won't come from me alone, they'll come from conversations like the ones we have today.
Any other contributions can be sent to us at TheMillsReview@fca.org.uk.