Artificial intelligence (AI) has the potential to significantly improve the delivery of financial services. Several financial authorities have recently begun developing frameworks that outline their expectations for AI governance and use by financial institutions. These frameworks converge on common guiding principles of reliability, accountability, transparency, fairness and ethics. In general, existing high-level governance, risk management and modelling requirements for traditional models already cover these AI principles. The key difference between regulatory requirements for traditional and AI models is the stronger emphasis, for the latter, on human responsibilities to prevent discrimination and other unethical outcomes. Moreover, while the emerging AI principles are useful, there are growing calls for financial regulators to provide more concrete practical guidance, given the challenges in implementing the principles. These challenges include the speed and scale of AI adoption by financial institutions, greater touchpoints with ethical and fairness issues, the technical construction of AI algorithms and the lack of model explainability. They also call for a proportionate and coordinated regulatory and supervisory response. As more specific regulatory approaches and supervisory practices emerge, global standard-setting bodies may be better placed to develop standards in this area.
JEL classification: C60, G29, G38, O30.
Keywords: artificial intelligence, machine learning, corporate governance, risk management, risk modelling.