Our Emerging Regulatory Approach To Big Tech And Artificial Intelligence, Speech By Nikhil Rathi, UK Financial Conduct Authority Chief Executive, Delivered At The Economist, London

Speaker: Nikhil Rathi, Chief Executive
Location: Economist Impact, Finance transformed: exploring the intersection of finance 2.0 and web3, London
Delivered: 12 July 2023
Note: this is the speech as drafted and may differ from the delivered version

 

Highlights

  • We welcome the government’s call for the UK to be the global hub of AI regulation and will open our AI sandbox to firms wanting to test the latest innovations.
  • Big Tech’s role as the gatekeepers of data in financial services will be under increased scrutiny.
  • Our outcomes- and principles-based approach to regulation, including the Senior Managers Regime and the Consumer Duty, should mean firms have scope to innovate while protecting consumers and market integrity. We will only intervene with new rules or guidance where necessary.
  • We will regulate firms that are designated as Critical Third Parties where they underpin financial services and can impact stability and confidence in our markets.

 


Introduction: AI solutions for human problems

Depending on who you speak to, AI could lead to the destruction of civilisation, or to the cure for cancer, or both.

It could either displace today’s jobs or enable an explosion in future productivity.

The truth probably embraces both scenarios. At the FCA, we are determined to ensure that, with the right guardrails in place, AI offers opportunity.

The Prime Minister said he wants to make the UK the home of global AI safety regulation.

We stand ready to make this a reality for financial services. We have been a key thought leader on the topic, most recently hosting 97 global regulators to discuss the regulatory use of data and AI.

Big Tech and their gatekeeping of financial data

Today, we published our feedback statement on Big Tech in Financial Services.  

We have announced a call for further input on the role of Big Tech firms as gatekeepers of data and the implications of the ensuing data-sharing asymmetry between Big Tech firms and financial services firms.

We are also considering the risks that Big Tech may pose to operational resilience in payments, retail services and financial infrastructure. And we are mindful of the risk that Big Tech could exploit consumer behavioural biases.

Partnerships with Big Tech can offer opportunities – particularly by increasing competition for customers and stimulating innovation – but we need to test further whether the entrenched power of Big Tech could also introduce significant risks to market functioning.

What does it mean for competition if Big Tech firms have access to unique and comprehensive data sets such as browsing data, biometrics and social media?

Coupled with anonymised financial transaction data, over time this could result in a longitudinal data set that no financial services firm could rival – one covering many countries and demographics.

Separately, with so many financial services firms using Critical Third Parties – indeed, as of 2020, nearly two-thirds of UK firms used the same few cloud service providers – we must be clear where responsibility lies when things go wrong. Principally this will be with the outsourcing firm, but we want to mitigate the potential systemic impact that could be triggered by a Critical Third Party.

Together with the Bank of England and the PRA, we will therefore be regulating these Critical Third Parties, setting standards for their services – including AI services – to the UK financial sector. That also means making sure they meet those standards and ensuring their resilience.

Ensuring market integrity in the age of AI

The use of AI can benefit markets, but, if unleashed unfettered, it can also cause imbalances and risks that affect the integrity, price discovery, transparency and fairness of markets.

Misinformation fuelled by social media can impact price formation across global markets.

Generative AI can affect our markets in ways and at a scale not seen before. For example, on Monday 22 May this year, a suspected AI-generated image purporting to show the Pentagon in the aftermath of an explosion spread across social media just as US markets opened.

It jolted global financial markets until US officials quickly clarified it was a hoax.

We have observed intraday volatility double compared with levels during the 2008 financial crisis.

This surge in intraday short-term trading across markets and asset classes suggests investors are increasingly turning to highly automated strategies.

Just last week, an online scam used a deepfake, computer-generated video of respected personal finance campaigner Martin Lewis to endorse an investment scheme.

There are other risks too: cyber fraud, cyber attacks and identity fraud are increasing in scale, sophistication and effectiveness. This means that as AI is further adopted, investment in fraud prevention and in operational and cyber resilience will have to accelerate in step. We will take a robust line on this – full support for beneficial innovation alongside proportionate protections.

Another area we are examining is the explainability – or otherwise – of AI models.

To make a great cup of tea, do you just need to know to boil the kettle and pour the boiling water over the teabag (AFTER the milk, of course – I am a Northerner)? Or do you need to understand why the molecules in the water move more quickly once you have imbued them with energy through the higher temperature? And do you need to know the correct name for this process – Brownian motion, by the way – or do you just need to know that you have made a decent cup of tea?

Firms in most regulatory regimes are required to have adequate systems and controls. Many in the financial services industry feel they want to be able to explain their AI models – or prove that the machines behaved in the way they were instructed to – in order to protect their customers and their reputations, particularly in the event that things go wrong.
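
In practice, that can start with something very simple. Below is a minimal sketch – the model, feature names, weights and threshold are all hypothetical – of a linear scoring system whose every decision decomposes into per-feature contributions that can be read back as reason codes:

```python
# A minimal, hypothetical sketch of explainability: a linear scoring
# model whose every decision decomposes into per-feature contributions
# ("reason codes"). Feature names, weights and threshold are invented.

WEIGHTS = {"income": 0.4, "missed_payments": -1.2, "years_at_address": 0.1}
THRESHOLD = 0.5

def score_with_reasons(applicant):
    # Each feature's contribution to the final score is explicit.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    # Rank features by absolute impact so the outcome can be explained.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, round(total, 2), reasons

print(score_with_reasons({"income": 2.0, "missed_payments": 1.0, "years_at_address": 3.0}))
# -> ('decline', -0.1, [('missed_payments', -1.2), ('income', 0.8), ...])
```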

AI models such as ChatGPT can invent fake case studies, sometimes referred to as 'hallucination'. This was visible in a recent New York court case, in which citations submitted by one set of lawyers were based on fabricated case material.

There are also potential problems around data bias. AI model outcomes depend heavily on the accuracy of data inputs. So what happens when the input data is wrong, or is skewed and generates a bias?

Poor-quality – or historically biased – data sets can have compounding effects when coupled with AI, which amplifies the bias. But what of human biases? It was not long ago that unmarried women were routinely turned down for mortgages. There are tales of bank managers rejecting customers' loan applications if they dared to dress down for the meeting.

Can we really conclude, therefore, that a human decision-maker is always more transparent and less biased than an AI model? Both need controls and checks.
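
To make the data-bias point concrete, here is a minimal sketch with entirely invented numbers and a deliberately naive 'model': a system trained on historically skewed lending decisions simply learns to repeat the skew.

```python
# Hypothetical sketch of bias amplification: a naive "model" trained on
# historically skewed lending decisions learns to repeat the skew.
# All rates and groups are invented for illustration.

import random

random.seed(0)

# Two groups with identical creditworthiness, but past human decisions
# approved group "B" half as often as group "A".
def historical_decision(group):
    approval_rate = {"A": 0.8, "B": 0.4}[group]
    return 1 if random.random() < approval_rate else 0

training_data = [(g, historical_decision(g)) for g in ("A", "B") * 5000]

def fit_approval_rates(data):
    # The "model" just learns the observed approval rate per group,
    # so the historical bias survives automation untouched.
    totals, approvals = {}, {}
    for group, decision in data:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + decision
    return {g: approvals[g] / totals[g] for g in totals}

print(fit_approval_rates(training_data))  # roughly {'A': 0.8, 'B': 0.4}
```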

Speculation abounds about large asset managers in the US edging towards unleashing AI based investment advisors for the mass market.

Some argue that autonomous investment funds can outperform human-led funds.

The investment management industry is also facing considerable competitive and cost pressures, with a PwC survey this week finding that one in six asset and wealth managers expects to disappear or be swallowed by a rival by 2027. Some say they need to accelerate tech enablement to survive. But it is intriguing that one Chinese hedge fund that was poised to use a completely automated investment model – effectively using AI as a fund manager – has recently dropped the idea, despite the model apparently being able to outperform the market significantly.

Embracing the opportunities of AI

And what of the opportunities of AI? There are many.

In the UK, annual growth in worker productivity in the first quarter of this year was the lowest for a decade.

There is optimism that AI can boost productivity. In April, a study by the National Bureau of Economic Research in the US found that productivity rose by 14% when over 5,000 customer support agents used an AI conversational tool.

Many of the jobs our children will do have not yet been invented but will be created by technology.

And what of the other benefits of AI in financial services? Such as:

  • The ability to use Generative AI and synthetic data to help improve financial models and cut crime.
  • Tackling the advice gap with better, more accurate information delivered to everyday investors, not just the wealthiest customers who can afford bespoke advice. For example, I recently met a start-up in Edinburgh carrying out controlled trials of generative AI tools in debt management, aiming to take the stigma away for customers and help them find viable solutions.
  • The ability to hyper-personalise products and services for people, for example in the insurance market, better meeting their needs.
  • The ability to tackle fraud and money laundering more quickly and accurately and at scale.

 

The FCA’s approach to AI

As a data-led regulator, we are training our staff to make sure they can maximise the benefits from AI.

We have invested in our tech horizon-scanning and synthetic data capabilities, and this summer established our Digital Sandbox, the first of its kind used by any global regulator, drawing on real transaction and social media data alongside synthetic data to support fintech and other innovations to develop safely.

Internally, the FCA has developed its supervisory technology. We are using AI methods for firm segmentation, portfolio monitoring and the identification of risky behaviours.
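
Purely for illustration – the FCA has not published these methods – the sketch below shows the kind of simple unsupervised segmentation that could group firms by risk features; the firms, features and numbers are all invented.

```python
# Illustration only: the FCA has not published its supervisory models.
# A tiny k-means shows the kind of unsupervised segmentation that could
# group firms by simple risk features. All firms and features are invented.

import random

random.seed(1)

# Hypothetical features per firm: (complaint rate, share of revenue
# from high-risk products).
firms = [(random.gauss(0.02, 0.01), random.gauss(0.10, 0.05)) for _ in range(50)]
firms += [(random.gauss(0.10, 0.02), random.gauss(0.60, 0.10)) for _ in range(10)]

def kmeans(points, k=2, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each firm to its nearest segment centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Move each centre to the mean of its assigned firms.
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(firms)
for centre, members in zip(centroids, clusters):
    print(tuple(round(v, 3) for v in centre), len(members), "firms")
```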

Collaboration & innovation

If there is one thing we know about AI, it is that it transcends borders and needs a globally co-ordinated approach.

The FCA plays an influential role internationally both bilaterally and within global standard setting bodies and will be seeking to use those relationships to manage the risks and opportunities of innovations and AI.

The FCA is a founding member and convenor of the Global Financial Innovation Network, where over 80 international regulators collaborate and share approaches to complex emerging areas of regulation, including ESG, AI and crypto.

We are also one of four regulators that form the UK Digital Regulation Cooperation Forum, pooling insight and experience on issues such as AI and algorithmic processing.

Separately, we are also hosting a global TechSprint on the identification of greenwashing in our Digital Sandbox, and we will extend this global TechSprint approach to cover Artificial Intelligence risks and innovation opportunities.

Where does responsibility lie?

We still have questions to answer about where accountability should sit – with users, with firms or with AI developers? And we must have a debate about societal risk appetite.

What should be offered in terms of compensation or redress if customers lose out because AI goes wrong? Or should those who consent to new innovations accept that they will have to swallow a degree of risk?

Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss of trust and confidence – something that, once it happens, is deleterious for financial services and very hard to win back.

One way to strike the balance and make sure we maximise innovation but minimise risk is to work with us, through our upcoming AI Sandbox.

While the FCA does not regulate technology itself, we do regulate the use of technology in financial services and its effects.

We are already seeing AI-based business models coming through our Authorisations gateway both from new entrants and within the 50,000 firms we already regulate.

And with these developments, it is critical we do not lose sight of our duty to protect the most vulnerable and to safeguard financial inclusion and access.

Our outcomes-based approach not only serves to protect but also to encourage beneficial innovation.

Thanks to this outcomes-based approach, we already have frameworks in place to address many of the issues that come with AI.

The Consumer Duty, coming into force this month, stipulates that firms must design products and services that aim to secure good consumer outcomes. And they have to demonstrate how all parts of their supply chain – from sales and distribution to after-sales service and digital infrastructure – deliver these.

The Senior Managers & Certification Regime also gives us a clear framework to respond to innovations in AI. This makes clear that senior managers are ultimately accountable for the activities of the firm.

There have recently been suggestions in Parliament that there should be a bespoke SMCR-type regime for the most senior individuals managing AI systems – individuals who may not typically have performed roles subject to regulatory scrutiny but who will now be increasingly central to firms' decision-making and the safety of markets. This will be an important part of the future regulatory debate.

We will remain super-vigilant about how firms mitigate cyber risks and fraud, given the likelihood that these will rise.

Our Big Tech feedback statement sets out our focus on the risks to competition.

We are open to innovation and to testing the boundaries before deciding whether and what new regulations are needed. For example, we will work with regulatory partners such as the Information Commissioner's Office to test consent models, provided that the risks are properly explained and demonstrably understood.

We will link our approach to our new secondary objective to support economic growth and international competitiveness – as the PM has set out, adoption of AI could be key to the UK’s future competitiveness, nowhere more so than in financial services.

The UK is a leader in fintech, with London among the top three centres in the world and number one in Europe.

We have world-class talent, and our world-class universities are helping to develop further skills.

We want to support inward investment with pro-innovation regulation and transparent engagement.

International and industry collaboration is key on this issue, and we stand ready to lead and help make the UK the global home of AI regulation and safety.