The World Federation of Exchanges (WFE), the global industry association for exchanges and central clearing counterparties, has called for enhancements to legislative, regulatory, and supervisory frameworks applicable to AI in financial services.
The WFE, which today published its response to the US Treasury’s consultation on the uses of AI in the financial services sector, said there are valid concerns about the uncertainties of an evolving AI landscape, which warrant a close look at regulation in order to protect investors and other market participants. However, the WFE recommended the Treasury seek an appropriate balance between innovation and protection, to ensure that the regulatory framework is neither too broad nor too complex, and that there is cohesion and alignment amongst regulators and international standard setters.
If the framework the US introduces fails to strike this balance, the benefits that AI brings to economic growth, productivity, automation and innovation will be at risk.
On behalf of the exchange and clearing industry, representing the providers of over 250 market infrastructures which saw more than $124tr in trading pass through them annually (at end-2023), the WFE advises the Treasury that:
- The definition of AI should be precisely tailored to avoid capturing more than is necessary. A broad definition would create onerous restrictions disproportionate to the risks posed by different tools.
- A definition of AI should focus on computer systems with the ability to make decisions or predictions based on automated, statistical learning.
- AI deployment by malicious actors is an emerging risk associated with this technology, one that financial services firms are well aware of and are already tackling.
- Whilst traditional risk management techniques can be used to manage the risks of AI systems, more work needs to be done to develop AI-specific risk management tools.
- Third parties will be valuable in helping to develop AI tools and risk management tools, but the Treasury is right to be cognisant of the risks around big tech firms exploiting their market dominance.
- Regulatory uncertainty is a key concern amongst our members. Regulators should focus on outcomes and use sound judgment, fostering collaboration to support innovation and competitiveness in financial markets.
- Our members favour a principles- and risk-based approach to developing a regulatory framework, where requirements are proportionate to the level of risk associated with AI applications. This needs alignment among the various financial regulators and consistency with international standards.
- Ultimately, government policy should encourage modernisation by promoting the use of cutting-edge technologies like AI, cloud computing, and machine learning in capital markets. This enhances market dynamics and provides better services to consumers.
Nandini Sukumar, Chief Executive Officer of the WFE, commented: “AI regulation must enhance protection whilst avoiding the curtailment of progress and modernisation. The definition of AI in the President’s Executive Order is overly broad and could create unnecessary complexity by imposing extensive compliance obligations if implemented for financial services. Policy should establish the appropriate safeguards and supervision, but it must also encourage innovation and promote the use of cutting-edge technologies, like AI. It’s through this that we can drive efficiency, enhance market dynamics and provide better services to consumers.”
Richard Metcalfe, Head of Regulatory Affairs at the WFE, commented: “While these technological innovations and the associated concerns about managing generative AI are significant, it is important to remember that, as trusted third parties providing secure and regulated platforms for trading securities, our members are already carefully scrutinising tools and establishing controls to govern AI use. The US Treasury should therefore take care to design an AI regulatory framework which is principles-based, to maintain flexibility and encourage innovation. We also need an incremental approach to AI regulation, allowing for gradual adjustments and learning, and ensuring that regulations do not hinder technological progress.”
The full response and policy recommendations can be found here.