Mondo Visione Worldwide Financial Markets Intelligence

Remarks Of Commissioner Kristin N. Johnson At George Washington University - Artificial Intelligence In Financial Markets: Enhancing Compliance, Supervision, And Enforcement

Date: 08/07/2025

Thank you to the George Washington University Regulatory Studies Center, Roger Nober, Susan Dudley, and the organizers of today’s event for allowing me to join virtually. As many of you are aware, I have spent the last several years engaging regulators and market participants from jurisdictions around the world on issues at the core of today’s discussion.[1]

How might advances in artificial intelligence (AI) increase inclusion, improve customer experiences, and democratize access to financial services; improve the accuracy and efficiency of financial services; and potentially reduce transaction costs as well as the costs of compliance?

These issues, among several other potential benefits and risks associated with the adoption of innovative technologies, are top of mind for me and many other senior regulators, chief executive officers, chief technology officers, chief information security officers, chief compliance officers, and chief risk managers around the world.

According to an International Monetary Fund paper exploring the benefits and risks of AI in finance, AI and machine learning (ML) technologies alongside other

[r]ecent technological advances in computing and data storage power, big data, and the digital economy are facilitating rapid AI/ML deployment in a wide range of sectors, including finance. The COVID-19 crisis has accelerated the adoption of these systems due to the increased use of digital channels.

AI/ML systems are changing the financial sector landscape. Competitive pressures are fueling rapid adoption of AI/ML in the financial sector by facilitating gains in efficiency and cost savings, reshaping client interfaces, enhancing forecasting accuracy, and improving risk management and compliance. AI/ML systems also offer the potential to strengthen prudential oversight and to equip [regulators] with new tools. . . .[2]

Indisputably, AI is rapidly transforming the financial sector, particularly in the areas of compliance, market surveillance, and regulatory enforcement. What once seemed the creative imaginings of science fiction or fantasy novels and films—forward-looking notions of a futuristic world—has now become a practical and increasingly essential tool across the financial market ecosystem. Market participants and regulators alike are leveraging AI and ML to improve risk management, detect misconduct, and strengthen the integrity of the markets.

Let’s explore the use of AI in compliance, bad actors’ potential misuse of AI, opportunities for supervisory technology (suptech) in enforcement, and a path forward.

AI and Industry Compliance

Financial institutions have been at the forefront of AI adoption, especially in compliance functions. AI is widely used in anti-money laundering (AML) efforts, where algorithms analyze transaction patterns across millions of credit card statements, bank statements, and account details to detect anomalies that may go unnoticed by traditional systems. ML models have dramatically reduced false positives in AML alerts[3]—a long-standing challenge for compliance teams, who may now rely on models that learn from training data to distinguish between benign and suspicious activity more precisely and more efficiently.
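To make the idea of anomaly detection concrete, here is a deliberately minimal sketch of the statistical scoring that underlies many AML alerting systems: score each transaction amount by its distance from the account's historical mean and flag outliers. This is an illustrative toy, not any firm's actual model; the threshold and data are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transactions whose amount deviates sharply from the history.

    Scores each amount by its distance from the mean, measured in
    standard deviations, and returns the indices of outliers.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# Mostly routine payments with one large outlier at index 7.
history = [120, 95, 110, 130, 105, 98, 115, 9000]
print(flag_anomalies(history))  # → [7]
```

Production systems replace the simple z-score with learned models over many features (counterparty, geography, velocity), but the core pattern—score, threshold, alert—is the same.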

AI also supports compliance with complex cross-border financial regulations. Financial services firms deploy ML to monitor transactions for potential sanctions violations, helping ensure that transactions align with regulatory requirements based on origin, amount, frequency, and other risk factors.[4]
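A rule-based pre-screen of the kind that often runs alongside ML models in sanctions monitoring can be sketched as follows. The jurisdiction codes, thresholds, and field names here are hypothetical placeholders, not real sanctions lists or regulatory limits.

```python
# Illustrative placeholders -- not real sanctions data or thresholds.
SANCTIONED_JURISDICTIONS = {"XX", "YY"}   # placeholder country codes
LARGE_AMOUNT = 10_000                     # illustrative amount threshold
MAX_DAILY_COUNT = 20                      # illustrative velocity limit

def screen(txn, daily_count):
    """Return a list of risk flags for one transaction."""
    flags = []
    if txn["origin"] in SANCTIONED_JURISDICTIONS:
        flags.append("sanctioned-origin")
    if txn["amount"] >= LARGE_AMOUNT:
        flags.append("large-amount")
    if daily_count > MAX_DAILY_COUNT:
        flags.append("high-frequency")
    return flags

print(screen({"origin": "XX", "amount": 12_500}, daily_count=3))
# → ['sanctioned-origin', 'large-amount']
```

In practice, flags like these feed a risk score and a human review queue rather than blocking transactions outright.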

Some firms have also embraced AI in communications surveillance, using platforms that offer digital communications governance to review internal communications for signs of fraud or misconduct. By automating these reviews, firms are better equipped to identify red flags early and maintain robust compliance programs.

A recent Government Accountability Office (GAO) report released in May of 2025—Artificial Intelligence: Use and Oversight in Financial Services—identifies six increasingly common activities for which financial services firms may choose to integrate AI models, including automated trading, countering threats and illicit finance, credit decisions, customer service, investment decisions, and risk management.[5]

The GAO report indicated that AI may be used to “detect and mitigate cyber threats through real-time investigation of potential attacks, flagging and blocking of new ransomware, and identification of compromised accounts and files” as well as to “identify fake IDs, recognize different photos of the same person, and screen clients against sanctions and other lists; analyze transaction data … and unstructured data (such as email, text, and audio data) to detect evidence of possible money laundering, terrorist financing, bribery, tax evasion, insider trading, market manipulation, and other fraudulent or illegal activities.”[6]

For many of these use cases, financial services firms rely on generative AI. However, for use cases that require a high degree of reliability or explainability—the ability to understand how and why an AI system produces decisions, predictions, or recommendations—firms are rightly reticent to employ generative AI models.

Regulators' Use of AI for SupTech

The benefits of AI are not limited to the private sector. U.S. regulatory agencies—including the Commodity Futures Trading Commission (CFTC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), the Securities and Exchange Commission (SEC), and the National Credit Union Administration (NCUA)—have begun integrating AI tools into their supervisory functions.

These agencies use AI to analyze vast quantities of financial data, identify outliers, and detect emerging risks.[7] For example, AI can flag inconsistencies in data submissions from financial institutions, or surface patterns that indicate potential regulatory violations. This use of AI, often referred to as “suptech” (supervisory technology), enhances regulators’ ability to carry out their oversight responsibilities efficiently and proactively.
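The kind of consistency check described above can be sketched with a simple validation: a reported total should equal the sum of its reported components. The filing structure and field names below are hypothetical, invented for illustration.

```python
def check_submission(report):
    """Flag internal inconsistencies in a hypothetical regulatory filing.

    Mirrors a basic suptech validation: the reported total must equal
    the sum of the reported position components.
    """
    issues = []
    component_sum = sum(report["positions"].values())
    if component_sum != report["total"]:
        issues.append(
            f"total {report['total']} != sum of positions {component_sum}")
    return issues

filing = {"total": 1000, "positions": {"futures": 600, "options": 350}}
print(check_submission(filing))  # → ['total 1000 != sum of positions 950']
```

Deterministic checks like this catch simple errors; ML layers on top of them surface the subtler patterns across filings and over time.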

Over the course of last year, the CFTC undertook extraordinary efforts to begin to clarify the Commission’s understanding of registrants’ use of AI and the potential benefits and limitations of the Commission’s implementation of AI for supervisory, surveillance, and enforcement purposes. In January of 2024, I worked with Commission staff to issue a Request for Comment distributed to our market participants to better understand the real-time adoption of AI models.[8] Following the Request for Comment, in December of 2024, the Commission issued a staff advisory on Use of Artificial Intelligence in CFTC-Regulated Markets.[9] One of the most significant takeaways from the staff advisory, which was echoed in executive orders issued by the prior administration, underscores the obligation for CFTC-regulated entities to maintain compliance with applicable statutory and regulatory requirements whether they choose to deploy AI or any other technology.

Addressing the Dark Side of AI

While AI has the potential to enhance compliance and supervision, it also introduces new risks. Alongside the promise of AI, we must consider the limitations and potential perils of implementing AI quickly without appropriate guardrails. Many of you in the room today, former Commissioner Berkovitz and Professor Cary Coglianese, among others, have participated in joint studies published by the Administrative Conference of the United States (ACUS) or independently published or presented on these limits. 

In previous speeches, I have outlined concerns regarding the implementation of AI models without effective guardrails and governance interventions. 

In a speech earlier this summer, I began to explore the specific concerns that may emerge as firms and regulators integrate agentic AI.[10] The discussion today, in fact, may largely focus on the integration of agentic AI models in compliance, surveillance, and enforcement. If so, I am hopeful that, in parallel to efforts to explore the benefits, panelists examining “AI’s Role in Regulation Post-Chevron” and “Regulatory Functions Most Amenable to AI-Driven Process Improvement” will also examine important concerns such as the limits of synthetic data, ghosts or hallucinations, data leakage, increasingly undetectable video and voice deepfakes, data accuracy, data security, and data integrity, among others.

Some bad actors are paving the road for regulators and enforcement actions using AI technology. But in many cases, the bad actions are simply traditional, garden-variety fraud with an AI white-label.

“AI washing”—the practice of exaggerating or misrepresenting AI capabilities to attract investors or customers[11]—is among the most concerning marketing and solicitation issues that financial market regulators currently face. Firms may claim to use advanced AI models to generate high returns when, in reality, they rely on rudimentary trading bots or nonexistent systems.[12]

Enforcement in Action

The CFTC has actively pursued enforcement actions against fraudulent actors who misuse or misrepresent AI. In a landmark case, the Commission obtained a $1.7 billion penalty—its largest ever—against a South African company that defrauded investors through a fraudulent multilevel marketing scheme.[13] The company falsely claimed to use a proprietary AI trading bot to generate high returns on Bitcoin investments. In reality, there was no proprietary trading bot and the firm engaged in minimal trading activity, most of which was unprofitable, and misappropriated investor funds.

This and other cases underscore the CFTC’s ability to tackle AI-related misconduct using existing legal tools. The Commodity Exchange Act (CEA) provides a robust and flexible framework that prohibits fraudulent and manipulative practices regardless of the underlying technology. For example, CEA Section 4c(a) outlaws disruptive practices such as spoofing,[14] while CEA Section 6(c)(1) and Regulation 180.1 give the Commission broad anti-fraud and anti-manipulation authority.[15] These provisions are intentionally technology-neutral, allowing the CFTC to remain agile as new innovations emerge.

The Commission has demonstrated, through its prior enforcement actions, that markets and market participants engaged in activities that are regulated by the Commission are expected to comply with applicable statutory and regulatory requirements, even when such activities occur with cryptocurrencies or through the use of AI. The technology-neutral approach of the CEA and CFTC regulations allows these provisions to be used to combat fraud in any shape, manner, or form.

The Strategic Importance of Suptech

A recent survey by the Financial Stability Institute (FSI) and the Bank for International Settlements Innovation Hub found that only 3 out of 50 supervisory authorities surveyed did not have ongoing suptech initiatives.[16] Those with a comprehensive suptech strategy were significantly more likely to deploy tools critical to supervision.[17]

This underscores the importance of not only embracing AI on a case-by-case basis, but also developing cohesive strategies for integrating AI into regulatory and supervisory workflows. By investing in data infrastructure, fostering inter-agency collaboration, and recruiting AI-savvy talent, regulators can better equip themselves to meet the demands of increasingly complex markets.

Finding a Pathway Forward

I am looking forward to exploring the following principles, which I outlined in a speech last year, and their role in our principles-based regulatory framework.[18] As I have previously explained, there are many things that the Commission can do immediately to enhance our understanding of AI and help guide the development of effective guardrails that foster responsible development of AI.[19]

Heightened Penalties

As a CFTC Commissioner, I am also deeply concerned about the potential for abuse of AI technologies to facilitate fraud in our markets. As we examine the development of and limitations on the legitimate uses of AI in our markets, it is also important for the CFTC to emphasize that any misuse of these technologies will draw sharp penalties.

In fact, I continue to call for the Commission to consider introducing heightened penalties for those who intentionally use AI technologies to engage in fraud, market manipulation, or the evasion of our regulations.

In many instances, our statutes provide for heightened civil monetary penalties where appropriate.

I propose that the use of AI in our markets to commit fraud and other violations of our regulations may, in certain circumstances, warrant a heightened civil monetary penalty.

Bad actors who would use AI to violate our rules must be put on notice and sufficiently deterred from using AI as a weapon to engage in fraud, market manipulation, or to otherwise disrupt the operations or integrity of our markets. We must make it clear that the lure of using AI to engage in new malicious schemes will not be worth the cost.

Recommendation for an Inter-Agency Task Force

At the end of 2023, the previous administration announced the creation of an AI Safety Institute, which was to be established within the National Institute of Standards and Technology (NIST), housed within the Commerce Department.[20]

Shortly thereafter, I proposed the creation of an inter-agency task force composed of financial regulators including the CFTC, SEC, Federal Reserve, Office of the Comptroller of the Currency, Consumer Financial Protection Bureau, FDIC, Federal Housing Finance Agency, and NCUA to develop guidelines, tools, benchmarks, and best practices for the use and regulation of AI in the financial services industry.[21]

Addressing the perils of AI, while harnessing its promise, is a challenge that will require a whole-of-government approach, with regulators working together across diverse agencies. I continue to advocate for agencies working together to provide their essential experience and expertise to help guide the development of AI standards for the financial industry.

Conclusion

The CFTC, in particular, is well positioned to lead in this space. Its principles-based and technology-neutral approach to regulation allows for flexible oversight that supports innovation while safeguarding market integrity. The Commission's mission—to foster open, transparent, competitive, and financially sound markets—naturally aligns with the adoption of cutting-edge technology.

AI is no longer a futuristic concept—it is a central feature of modern financial markets. Used responsibly, AI enhances compliance, improves oversight, and enables faster and more effective enforcement. The CFTC’s technology-neutral framework allows it to keep pace with innovation while maintaining essential investor protections and market integrity.

Thanks again for allowing me to share my thoughts with you today. I anticipate you will have an energetic, generative, and thoughtful discussion on the panels and following the presentations this afternoon.


[1] The views I share today are my own and not the views of the Commission, my fellow Commissioners, or the CFTC staff.

[2] International Monetary Fund, Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance (Oct. 22, 2021), https://www.imf.org/en/Publications/Departmental-Papers-Policy-Papers/Issues/2021/10/21/Powering-the-Digital-Economy-Opportunities-and-Risks-of-Artificial-Intelligence-in-Finance-494717.

[3] Hariharan Pappil Kothandapani, Automating financial compliance with AI: A New Era in regulatory technology (RegTech), Int’l J. of Sci. and Rsch. Archive, 11(01), 2651 (2024), https://tinyurl.com/4r42tdxw.

[4] Id.

[5] U.S. GAO, Report to Congressional Committees, Artificial Intelligence: Use and Oversight in Financial Services at 8 (May 2025), https://www.gao.gov/assets/gao-25-107197.pdf.

[6] Id. at 8.

[7] Id. at 33, 35.

[8] U.S. CFTC, Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets (Jan. 25, 2024), https://www.cftc.gov/PressRoom/PressReleases/8853-24.

[9] CFTC Staff Issues Advisory Related to the Use of Artificial Intelligence by CFTC-Registered Entities and Registrants (Dec. 5, 2024), https://www.cftc.gov/PressRoom/PressReleases/9013-24.

[10] Keynote Remarks of Commissioner Kristin N. Johnson at RegHub Summit London 2025: The Future of Finance: Enabling AI Tools To Enhance Compliance and Surveillance (June 18, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson20.

[11] Jonathan Uslaner & Alec Coquin, ‘AI Washing’: regulatory and private actions to stop overstating claims, Reuters (May 30, 2025), https://www.reuters.com/legal/legalindustry/ai-washing-regulatory-private-actions-stop-overstating-claims-2025-05-30/.

[12] See Customer Advisory: AI Won’t Turn Trading Bots into Money Machines (Jan. 26, 2024), https://www.cftc.gov/LearnAndProtect/AdvisoriesAndArticles/AITradingBots.html.

[13] CFTC Charges South African Pool Operator and CEO with $1.7 Billion Fraud Involving Bitcoin (June 30, 2022), https://www.cftc.gov/PressRoom/PressReleases/8549-22; Federal Court Orders South African CEO to Pay Over $3.4 Billion for Forex Fraud (Apr. 27, 2023), https://www.cftc.gov/PressRoom/PressReleases/8696-23.

[14] 7 U.S.C. § 6c(a).

[15] 7 U.S.C. § 9(1); 17 C.F.R. § 180.1.

[16] Jermy Prenio, Peering through the hype—assessing suptech tools’ transition from experimentation to supervision, Financial Stability Institute at 5 (June 2024), https://www.bis.org/fsi/publ/insights58.pdf.

[17] Id. at 2.

[18] Speech of Commissioner Kristin Johnson: Building A Regulatory Framework for AI in Financial Markets (Feb. 23, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson10.

[19] Id. 

[20] Press Release, FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence, White House (Nov. 1, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/01/fact-sheet-vice-president-harris-announces-new-u-s-initiatives-to-advance-the-safe-and-responsible-use-of-artificial-intelligence/.

[21] Opening Statement of Commissioner Kristin N. Johnson Before the Market Risk Advisory Committee Future of Finance Subcommittee Meeting (Mar. 15, 2024), https://www.cftc.gov/PressRoom/SpeechesTestimony/johnsonstatement031524.