Good afternoon. Thank you to President Lorie Logan, Senior Vice President and Senior Advisor to the President Sam Schulhofer-Wohl, and the Federal Reserve Bank of Dallas for hosting us. Consistent with the title selected for the Symposium, today’s discussion will explore AI Risks and Opportunities Across the Digital and Cyber Landscape, including a broad range of topics focused on fostering responsible innovation, as well as topics focused on proactively addressing potential risks. As always, allow me to share a standard disclaimer: my views are my own and not necessarily the views of the Commission, Commission staff, or my fellow Commissioners.
This morning, I gave a livestream interview from my hotel with Reunion Tower standing tall behind me, offering an impressive landmark as background for the interview. For those of you who are not familiar, Reunion Tower is an iconic symbol in the Dallas skyline. Like Reunion Tower and the breathtaking 360-degree view it provides, our smart approach to supervision of financial markets has enabled us to build and maintain the deepest and most liquid capital and derivatives markets in the world while still maintaining the ability to see the market from any angle. How have we achieved these goals? We have harnessed lessons from the customs and traditions that built successful market and prudential supervision and oversight for over one hundred years under federal legislation and for over two hundred and fifty years since the founding of our nation. At the same time, we are forward-looking, appreciating the innovative design and potential for technology to shape enduring, healthy, competitive financial markets that foster market integrity and stability and promote customer and investor protection.
It is an honor to be here and to see so many familiar faces, including market and prudential regulators, industry representatives from traditional financial services firms and emerging technologies, academics, and public interest advocates. Any successful convening on the issues that we will tackle today requires a multi-stakeholder dialogue drawing on all corners to help us ensure that supervision and oversight are best-in-class and fit-for-purpose.
As I intimated, today’s Symposium will explore topics that are at the core of our markets and reflect the future of finance. In my time as a Commissioner, and for decades prior to my public service, I have worked to ensure first-best outcomes for our economy, customer protection, and industry initiatives in these areas.
AI: Generating New Buzz
Over the last few decades, we have witnessed the evolution of a number of technologies. While thoughts of artificial intelligence, automation, and robotics have long populated sci-fi novels and films, it was only during the last half-century that machine intelligence became an increasing feature in financial markets. The advent of, and advances in, computer technology and computing capabilities have significantly accelerated the adoption of various forms of AI in financial markets and enhanced the efficiency and execution of various back-office and compliance functions that were sources of consternation and crises forty or fifty years ago.
Three distinct phases of AI have marked the most recent chapter of financial markets development and evolution – the creation of supervised and unsupervised machine learning algorithms, the creation of generative AI (GenAI), and most recently, the launch of agentic AI. As we traverse the most recent stages of these innovative developments, I think that expert, industry, and customer-protection-driven dialogues are essential to the creation of any potential regulation, or simply to effective oversight and supervision of financial markets. I am looking forward to hearing from panelists today regarding the potential and possible limitations of the most cutting-edge aspects of this most recent phase of AI developments.
GenAI
A Treasury report focused on AI-based cybersecurity risks in the financial services sector notes that:
The term “Generative AI” means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.[1]
In general, a user inputs a specific prompt into an interface to produce synthetic content. Tools like ChatGPT and Claude apply this model to produce text, audio, and images based on the input. As we all quickly noticed, GenAI has real limitations. Two examples are non-determinism, the potential for different outputs to result from the same input, and hallucinations: notwithstanding reliance on incredibly large amounts of data gathered from the internet, GenAI models may generate false information that is highly persuasive.[2]
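Non-determinism is a direct consequence of how these models generate text: each next token is typically sampled from a probability distribution, so the same prompt can yield different completions. The sketch below illustrates only that sampling step; the token names and probabilities are invented for illustration, not taken from any real model.

```python
import math
import random

# Hypothetical next-token probabilities; in a real LLM these would come
# from the model itself. The values here are invented for illustration.
NEXT_TOKEN_PROBS = {"open": 0.45, "closed": 0.35, "congested": 0.20}

def next_token(temperature=1.0, rng=None):
    """Sample one next token.

    temperature == 0 picks the single most likely token (deterministic);
    temperature > 0 samples stochastically, so repeated calls on the same
    prompt can return different tokens (non-determinism).
    """
    rng = rng or random.Random()
    tokens = list(NEXT_TOKEN_PROBS)
    if temperature == 0:
        return max(tokens, key=NEXT_TOKEN_PROBS.get)
    # Sharpen or flatten the distribution by scaling log-probabilities.
    weights = [math.exp(math.log(NEXT_TOKEN_PROBS[t]) / temperature)
               for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same "prompt", sampled ten times: the outputs need not agree.
samples = [next_token(temperature=1.0, rng=random.Random(seed))
           for seed in range(10)]
```

At temperature zero the function always returns the same token; at higher temperatures repeated runs can and do diverge, which is precisely the non-determinism described above.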
Notwithstanding a general propensity to be accurate, current GenAI models may not comprehend certain real-world roadblocks because these models rely heavily on user input and training data to predict patterns.
For example, a GenAI model built on an LLM similar to the LLMs that enable GPT-4 can successfully offer highly accurate driving directions in New York City. However, when researchers added street closures or detours (both of which are common in many cities), the models struggled to achieve the same performance level, and the accuracy of their predictions was drastically reduced.[3]
There is tremendous potential for GenAI to facilitate execution of regulatory reporting and compliance obligations. Regulators supervising markets may use GenAI for supervisory technology (SupTech) to better enable oversight of know-your-customer (KYC) and anti-money laundering (AML) compliance, to expedite routine reporting and to enable efficient review of responses and comment letters issued in connection with requests for information or comment on important, timely issues emerging in financial markets.
Agentic AI
More recent efforts of technologists have generated a next-level AI model that does more than generate synthetic content. Agentic AI endeavors to make decisions, take actions, and adapt to changing inputs. So, for example, an agentic AI model would not be stymied by the road closures and detours that crop up on a map of busy New York City streets. An agentic AI model can tackle these new obstacles, adapting as the information inputs regarding routing change.
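The re-planning behavior can be made concrete with a toy routing example. The street grid, travel times, and closure below are all hypothetical; the sketch simply shows the step an agentic system performs when a closure is reported: recompute the route rather than return a now-invalid one.

```python
import heapq

# Hypothetical street grid: intersection -> {neighbor: travel minutes}.
GRAPH = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 7},
    "D": {},
}

def shortest_path(graph, start, goal, closed=frozenset()):
    """Dijkstra's shortest-path search; any street (edge) in `closed` is skipped."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, minutes in graph[node].items():
            if (node, nbr) not in closed:
                heapq.heappush(queue, (cost + minutes, nbr, path + [nbr]))
    return None  # no route remains

# A static model might memorize the best route A -> C -> B -> D.
# An agent re-plans when told the C -> B street has closed.
best = shortest_path(GRAPH, "A", "D")
detour = shortest_path(GRAPH, "A", "D", closed={("C", "B")})
```

The point is not the search algorithm itself but the loop around it: the agent observes the changed condition (the closure) and recomputes on its own, with no new human prompt required.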
Agentic AI introduces AI agents designed to complete tasks in an autonomous manner. According to the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Lab (CSAIL), Agentic AI is “designed to pursue complex goals with autonomy and predictability” by “taking goal-directed actions, making contextual decisions, and adjusting plans based on changing conditions with minimal human oversight” to enhance productivity.[4] What does this mean for those of us who do not have an advanced degree in computer science and artificial intelligence from MIT? Agentic AI focuses on the creation and utilization of autonomous, task-based agents to showcase AI’s ability to do, rather than to just create.
Potential applications of this technology are widespread and include healthcare (identifying, mapping, monitoring, and predicting disease prognosis), global logistics (rerouting shipped commodities to account for weather, geopolitical events, or other exogenous events in a supply chain), and even simple creature-comfort energy optimization (adjusting heating, air conditioning, and lighting for maximum efficiency). In the financial services industry, and in our markets broadly, agentic AI presents an array of cost savings and efficiencies to be had with the proper implementation of this technology. For example, manual transaction reviews typically conducted in different types of auditing can be completed by AI agents that autonomously scan financial statements and flag transactions that do not comply with their respective regulations. Credit scoring models, which typically rely on static data, now have the potential to rely on real-time transaction data, behavior trends, and economic indicators, and can monitor credit continuously instead of providing credit snapshots.[5] Agentic AI can also be used to improve efficiencies in customer interactions through automation in financial planning and optimization of client communications, and in market intelligence by monitoring the vast data produced by the markets each day and analyzing the data for notable shifts to alert analysts to opportunities and risks.[6] More importantly, from a regulator's perspective at least, properly architected agentic AI systems can produce robust compliance and fraud prevention systems, including those that can monitor for AML risks by flagging and dynamically intervening in high-risk transactions, automating claims triaging and refining risk assessments in claims and underwriting, tracking real-time market threats and making risk mitigation recommendations with robust data sets, and even identifying bugs, deploying automatic updates, and running software compliance testing in real time.[7]
In the context of producing systems that can complete tasks without human oversight, like creating robust compliance and reporting systems that can create tangible operational efficiencies and increase compliance with applicable regulations, agentic AI builds upon GenAI. It is distinct from GenAI in four discernible ways: a focus on action and decision-making rather than creating synthetic data and content; removal of the necessity to continuously input prompts; an ability to act independently to carry out activities and tasks within its parameters; and, unlike GenAI programs, which are static once trained, the ability to remain dynamic by continuously adjusting to data and learning from its own mistakes.[8]
But with every great opportunity comes risk. Agentic AI suffers from a vulnerability in that outputs are only as good as inputs – meaning, if the training model data is biased, incomplete, or otherwise compromised, agentic AI outputs may be similarly inadequate.
Perhaps more immediately concerning for regulators who are cops on the financial markets beat: as the potential for positive, efficient, market-enhancing use cases for AI grows, so too does the potential for misuse of the same technology by bad actors. GenAI's increasing power to create synthetic data, which might be inaccurate because of purposeful prompting by a bad actor or because of the model's own vulnerabilities and insufficient data sets, creates the ability to insert misleading or malicious data that can lead to hallucinations in the output of AI agents. Because the agents work autonomously, an improperly architected system has the potential to create a continuous loop of improper data and feedback, effectively poisoning the model's own data. Further, agentic AI suffers some of the same vulnerabilities and risks as GenAI, including privacy concerns over the vast amounts of data used to fuel the algorithms and data learning sets, risks of unfairness and bias due to incomplete or unrepresentative data, and data leakages and model inference attacks that can expose sensitive data.[9]
Other risks that should be carefully considered as agentic AI models are integrated into our markets include the limitations of synthetic data, data leakages, data integrity, data security, data privacy, ethical concerns, the absence of a human in the loop, security vulnerabilities (hijacking or exploitation), and accountability among others.
Cyber Threats: The AI Problem and Solution
Over the course of my service, discussions of cybersecurity and artificial intelligence have become increasingly intertwined. I have closely followed these topics and the increasing volume and severity of cyberattacks in part due to the rise in AI used by bad actors to perpetrate these attacks. Over the last year in particular, several reports highlight the rise in cyberthreats across financial markets and discuss potential risks that cyber threats pose.[10] I have continuously advocated for the Commission to take a leading role among domestic and international regulators in addressing these issues to ensure that our market participants are prepared, and in turn, that our overall markets remain resilient.
In April, my remarks at an AI summit highlighted findings from the Treasury report on AI-fueled cyber and fraud threats that pose significant risks to our markets, including AI-driven fraud, vulnerabilities of technology, and synthetic identities and impersonation. In the speech, I called for regulators to collaborate and coordinate efforts to identify a path for introducing responsible innovation in our markets.[11]
A recent FSOC Report notes gaps in financial institutions’ cybersecurity preparedness, risk management, and business continuity practices with respect to AI. The report notes, “AI’s data intensity and higher complexity, as well as increased reliance on third-party vendors of AI technology can complicate the ability to fend off attacks.”[12]
The FSOC Report explains that “[c]yberthreat actors may also be able to use AI tools, such as generative AI, to enable attacks on the financial services sector, particularly through the use of social engineering, malware generation, vulnerability discovery, and disinformation. While these cyber attacks are neither new nor unique to AI, AI tools may make these attacks much easier for a less sophisticated adversary.”[13] In December 2024, the Treasury Department released an additional report on AI in financial services highlighting uses of AI by financial services firms. That report notes that “AI is widely used for cybersecurity risk management…including analyzing large sets of data, detecting anomalies, flagging suspicious activities, and verifying customer identities under Bank Secrecy Act (BSA) obligations” and goes on to note that “Generative AI has been deployed to complement an investigation platform in collating and summarizing data and automating report creation and filing. AI is also being used in compliance with risk management guidelines, including managing operational risks, meeting capital and liquidity standards, improving stress test scenarios, and enhancing forecasting accuracy.”[14]
As agentic AI comes into focus, it may present new opportunities to build upon the systems that financial services firms may already be working on and enable these tools to be more tailored to their specific organizations.
As I continue to study these issues and engage with market participants, AI has increasingly been discussed as a potential mitigant to the very risks that the technology creates in other contexts. In fact, AI is being discussed not just as a potential benefit, but possibly a necessary element to fighting AI-driven cyberthreats. I am reminded of a saying I heard at a prior event on this topic, that firms need to be able to “fight fire with fire.” In my remarks in April, I encouraged regulators to focus on how we may be able to use AI to combat cybersecurity and fraud threats. In other words, AI may offer useful SupTech solutions to detect fraud and market manipulation.
Market participants have already been using AI for compliance and supervision functions, and we may expect that number to increase. For example, the FSB Report notes that financial institutions are using AI for compliance with fraud and AML/CFT requirements in more and more varied use cases. The report notes that “[a]lthough the use of AI models to comply with AML/CFT requirements and to perform fraud detection were already identified in the 2017 report, they have been more widely deployed since then to facilitate investigations into sanctions evasion, to identify misuse of legal persons and legal arrangements, to uncover trade fraud and trade-based money laundering, and to detect tax evasion, fraud/scams, and money mules.”[15] The report discusses some enhanced benefits of generative AI, and our discussion today may show why agentic AI can even go a step further. Similarly, a recent consultation report published by the International Organization of Securities Commissions (IOSCO) on AI in capital markets reports from a survey of IOSCO members and self-regulatory organizations that market participants are not only using AI “to enhance the effectiveness of AML and CFT measures,” but in addition to other compliance uses, specifically using AI for cybersecurity, including “for vulnerability, threat, phishing, and anomaly detection; for automated response and authentication; in risk management and compliance surveillance activities; and to assist with the detection and prevention of frauds and scams.”[16]
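To make the fraud and AML detection use case concrete, consider a deliberately simple anomaly flag: a z-score test on transaction amounts. This is only a stand-in sketch; the function name, threshold, and data below are illustrative assumptions, and the production systems described in the FSB and IOSCO reports use far richer models and features.

```python
import statistics

def flag_outlier_transactions(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` population
    standard deviations from the mean.

    A toy stand-in for ML-based anomaly detection; real AML systems also
    weigh counterparties, velocity, geography, and known typologies.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, amt in enumerate(amounts)
            if abs(amt - mean) / stdev > threshold]

# Hypothetical daily transaction amounts; the last one is anomalous.
txns = [120, 95, 110, 130, 105, 98, 10_000]
flagged = flag_outlier_transactions(txns)
```

In practice, robust statistics (for example, median-based measures) or learned models are preferred, since a single large outlier inflates the very standard deviation it is measured against; the sketch simply illustrates the flag-and-review pattern the reports describe.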
On the regulators’ side, there are also opportunities to use AI to enhance our ability to carry out our missions. The FSB Report notes that “Supervisory authorities’ use of SupTech has increased, with 59% of authorities surveyed using various applications in 2023, a 5-percentage-point increase from 2022.”[17] With the data that it collects and its responsibilities for market oversight, it is easy to imagine the CFTC exploring how SupTech could advance many aspects of its mission.
TPRM: Market Risks and Beyond
As we hear today from a truly impressive group of experts about how some of these new technologies are being integrated into their organizations, and how, at a micro and macro level, these innovations may be capable of changing (and in some cases already have changed) how we operate or interact with different players in our markets, I would ask you to consider not just the big picture of what the technology or the outcome may be, but what goes into making it happen. In many cases, we will see that critical third-party vendors are an integral part of that: in some cases, the technology itself will come from a vendor, and in others, a vendor may supply an important input, such as data centers or cloud storage. It is important to highlight a number of potential risks that may relate to third-party risk management, such as concentration risk among a limited number of providers.[18]
As I have discussed previously, MRAC has been at the forefront of the Commission’s efforts to address the importance of cyber resilience for market participants, central counterparties and the broader market and economy. In March 2023, MRAC held a “first-of-its-kind” public meeting to discuss the cybersecurity event at ION Cleared Derivatives that led to a ripple effect across our markets. This was the first chance for experts across our industry to come together to evaluate the event as well as begin to map out next steps to ensure cyber preparedness among market participants, service providers, and other sources that have the potential to impact our markets.
After the March 2023 meeting, both the Commission and the MRAC got to work on addressing the cyber resilience of market participants. The Commission developed a proposed rule that would implement an operational resilience framework for futures commission merchants, swap dealers, and major swap participants, but did not focus on similar cyber risk in other areas, such as derivatives clearing organizations (DCOs). The CCP Risk & Governance Committee took up the mantle where the Commission left off and developed recommendations that highlight the importance of cyber resilience in DCOs and the need for a more robust regulatory framework. These recommendations, which the MRAC voted to advance to the Commission, would expand upon the existing framework and require that DCOs establish, implement, and maintain a third-party relationship management program.
CFTC Rule 39.18, which establishes system safeguard standards for DCOs, addresses outsourcing but does not expressly discuss third-party relationships; the CCP Risk and Governance recommendations would build upon the framework of Rule 39.18 by adding a third-party risk management program to paragraph (b)(2). The proposed language notes that “[a] robust TPRM program should identify, assess, mitigate and monitor the full scope of risks that the use of third party arrangements [presents] through implementation,” at a minimum, of certain enumerated principles, including, among other things: written policies and procedures that cover the entire lifecycle of the third-party relationship; personnel with the expertise to monitor the third-party service provider; diligence before onboarding and exit strategies and alternatives before termination; risk-based monitoring; and more.[19]
The recommendations build upon the principles-based approach of the Core Principles as well as lessons learned and best practices from voices across the industry and from international standard-setting bodies. As noted in the report:
“These principles are intended to reflect lessons learned from industry efforts and best practices in derivatives, the guidance notes in Form DCO, the NFA interpretive guidance, lessons learned from the wider context of third-party relationship management, as well as the principles enunciated in the PFMIs. Incorporating these principles in Commission regulations would enable the Commission to update its regulatory framework with respect to critical third party service providers and to bring its regulations in line with internationally accepted standards, while maintaining a principles based approach to regulation.”[20]
Cyber resilience is a critical gateway issue for protecting market integrity. At the risk of sounding like a broken record, I urge everyone to be thoughtful about these issues and what steps we can take to strengthen market participants and our broader derivatives and global financial markets. Effectively combatting cyber threats will require a coordinated effort among regulators and industry, and I believe there is a lot we can accomplish across a number of different areas, ranging from considering best practices for governance and effective risk management to leveraging technology through SupTech or RegTech innovations.
Conclusion
Reunion Tower stands tall and strong in Dallas largely because it is built on a solid foundation. As we think about integrating innovative technologies into our markets and as we focus on cyber resilience and third-party risk management, as well as the benefits and threats of AI-enhanced cybersecurity, I look forward to collaborating with different regulators, industry experts, and academics at roundtables and events like this one to continue to study these issues. My hope is that we can continue to advance a shared understanding of the risks and opportunities to develop best practices or to use these technologies to monitor and fight back against cyber threats.
[1] Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, U.S. Dept. of the Treasury (Mar. 2024), https://home.treasury.gov/system/files/136/Managing-Artificial-Intelligence-Specific-Cybersecurity-Risks-In-The-Financial-Services-Sector.pdf.
[2] Gonçalo Ribeiro, Understanding The Limitations of Generative AI, Forbes (May 9, 2024), https://www.forbes.com/councils/forbestechcouncil/2024/05/09/understanding-the-limitations-of-generative-ai/.
[3] Justin Y. Chen et al., Evaluating the World Model Implicit in a Generative Model (June 6, 2024), https://arxiv.org/pdf/2406.03689.
[4] Audrey Woods, Agentic AI: What you need to know about AI Agents, MIT CSAIL Alliances, https://cap.csail.mit.edu/agentic-ai-what-you-need-know-about-ai-agents.
[5] Bryan Zhang and Kieran Garvey, Agentic AI will be the real banking disruptor, The Banker (Feb. 25, 2025), https://www.thebanker.com/content/886b880f-fc01-458d-81a5-4ad4c27815da.
[6] Kieran Garvey et al., How Agentic AI will transform financial services with autonomy, efficiency and inclusion, World Economic Forum (Dec. 2, 2024), https://www.weforum.org/stories/2024/12/agentic-ai-financial-services-autonomy-efficiency-and-inclusion/.
[7] Id.
[8] Audrey Woods, Agentic AI: What you need to know about AI Agents, MIT CSAIL Alliances, https://cap.csail.mit.edu/agentic-ai-what-you-need-know-about-ai-agents.
[9] Emerging Risks and Opportunities of Generative AI for Banks – A Singapore Perspective, Mindforge, https://www.mas.gov.sg/-/media/mas-media-library/schemes-and-initiatives/ftig/project-mindforge/emerging-risks-and-opportunities-of-generative-ai-for-banks.pdf.
[10] See, e.g., U.S. Dep’t of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (Mar. 2024), https://home.treasury.gov/system/files/136/Managing-Artificial-Intelligence-Specific-Cybersecurity-Risks-In-The-Financial-Services-Sector.pdf (Treasury Report); Financial Stability Oversight Council, Annual Report (Dec. 6, 2024), https://home.treasury.gov/system/files/261/FSOC2024AnnualReport.pdf (FSOC Report); Financial Stability Board, The Financial Stability Implications of Artificial Intelligence (Nov. 14, 2024), https://www.fsb.org/uploads/P14112024.pdf (FSB Report).
[11] Opening Remarks of Commissioner Kristin N. Johnson at GAIM Ops AI Summit: Using AI To Combat Cybersecurity and Fraud Risks (Apr. 7, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson17.
[12] FSOC Report at 86 (citation omitted).
[13] Id.
[14] U.S. Dep’t of the Treasury, Artificial Intelligence in Financial Services (Dec. 2024), https://home.treasury.gov/system/files/136/Artificial-Intelligence-in-Financial-Services.pdf (citation omitted).
[15] FSB Report at 12 (citation omitted).
[16] IOSCO, Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges: Consultation Report (Mar. 2025) at 21-22, https://www.iosco.org/library/pubdocs/pdf/IOSCOPD788.pdf (IOSCO Report).
[17] FSB Report at 13 (citing Cambridge Centre for Alternative Finance (2023), Cambridge SupTech Lab: State of SupTech Report 2023).
[18] See, e.g., Keynote Remarks of Commissioner Johnson for Governing Data at Iowa Innovation and Business Law Center and Yale Law Journal of Law & Technology at Yale Law School: Twin Peaks – Emerging Technologies (AI) and Critical Third Parties (Apr. 4, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson16; Opening Remarks of Commissioner Kristin N. Johnson at GAIM Ops AI Summit: Using AI To Combat Cybersecurity and Fraud Risks (Apr. 7, 2025), https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson17.
[19] Market Risk Advisory Committee, Recommendations on DCO System Safeguards Standards for Third Parties (Dec. 2024), https://www.cftc.gov/media/11666/mrac121024_DCOThirdPartySystemSafeguards/download (MRAC DCO System Safeguards Report).
[20] Id.