We're Not There Yet: Current Regulation Around AI May Not Be Sufficient - Keynote Address By ASIC Chair Joe Longo At The UTS Human Technology Institute Shaping Our Future Symposium, 31 January 2024

Date: 31/01/2024

KEY POINTS

  • ASIC Chair Joe Longo spoke at the UTS Human Technology Institute Shaping Our Future Symposium on the current and future state of AI regulation and governance.
  • All participants in the financial system have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies – and existing obligations around good governance and the provision of financial services don’t change with new technology.
  • ASIC will continue to act, within our remit, to deter bad behaviour whenever appropriate and however caused. Our focus is – and will always be – the safety and integrity of the financial system and positive outcomes for consumers and investors.

“Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur.”[1]

These words are from the Federal Government’s interim report on AI regulation. I’m sure most of you are familiar with it. It’s clear, then, that a divide exists between our current regulatory environment and the ideal. Today’s theme of ‘bridging the governance gap’ presupposes such a divide. It invites us to consider what AI governance and regulation might look like in the ideal, how great the divide is between that ideal and current circumstances – and, of course, how we might go about bridging that divide.

But it all hinges on that first question: what would need to be addressed for the regulatory framework to ‘fit the bill’? Or, to put it another way: in what way might the current regulatory framework inadequately prevent AI-facilitated harms? This question is key. We can only bridge the gap – and create our best approximation to the ideal – if we know where that gap lies.

So my purpose today is to look for that gap. But first, I want to make it very clear that any future AI regulatory changes should not be taken to mean that AI isn’t already regulated. It is. And I will devote the first part of my speech to making that clear.

AI is not the Wild West

Earlier this month, Microsoft’s AI Tech & Policy Lead in Asia said that “2024 will be the year that we start to build sensible, safe, and expandable regulation around the use of AI technologies.”[2]

While I agree with the sentiment, statements like this imply that AI is some kind of ‘Wild West’, without law or regulation of any kind. Nothing could be further from the truth. As the interim report noted, “businesses and individuals who develop and use AI are already subject to various Australian laws. These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy.”[3]

For example, current directors’ obligations under the Corporations Act aren’t specific duties – they’re principles-based. They apply broadly, and as companies increasingly deploy AI, directors must pay close attention to how those duties apply.

In 2022, the Federal Court found that RI Advice breached its licence obligations to act efficiently and fairly by failing to have adequate risk management systems for its cybersecurity risks.[4] It’s certainly not a stretch to apply this thinking to financial services licensees’ use and operation of AI. In fact, ASIC is already pursuing an action in which AI-related issues arise: we believe the use of a demand model was part of an insurance pricing process that led to the full benefit of advertised loyalty discounts not being appropriately applied.[5]

The point is, the responsibility for good governance doesn’t change just because the technology is new. Whatever may come, there’s plenty of scope right now for making the best use of our existing regulatory toolkit. And businesses, boards, and directors shouldn’t let the international discussion around AI regulation lull them into thinking AI isn’t already regulated. Because it is. For this reason, and within our remit, ASIC will continue to act, and act early, to deter bad behaviour whenever appropriate and however caused.

We’re willing to test the regulatory parameters where they’re unclear or where corporations seek to exploit perceived gaps. Among other things, that means probing the oversight, risk management, and governance arrangements entities have in place. We’re already conducting a review into the use of AI in the banking, credit, insurance, and advice sectors. This will give us a better understanding of the actual AI use cases being deployed and developed in the Australian market – and how they affect consumers. We’re testing what risks to consumers licensees are identifying from the use of AI, and how they’re mitigating those risks.

Is this enough?

But just because existing regulation can apply to AI doesn’t mean there’s nothing more to do. Much has already been made of 2024 as ‘the year AI grows up’. Phrases like ‘leaps forward’ and ‘rapid progress’[6] abound, suggesting an endless stream of benefits to consumers and businesses in the wake of AI’s growth.

And they’re right. AI continues to be an astonishing development. The potential benefits to businesses and individuals are enormous – an estimated ‘additional $170 billion to $600 billion a year to Australia’s GDP by 2030’.[7] But that very rapidity brings with it a host of questions.

After the World Wide Web launched in 1991, it took seven years to gain 100 million users. When Myspace launched 12 years later, it hit that milestone in three years. Facebook, YouTube, and Spotify all took four years, and Uber – that great disruptor – took five. Since then, the times have shrunk dramatically. TikTok, launched in 2017, took just nine months to reach 100 million users, while ChatGPT took… just two months.

The open question is how regulation can adapt to such rapidity. As food for thought: it took the Fair Work Ombudsman two years to determine that Uber drivers are not employees.[8] That reflects no fault or delay – it’s the natural pace of any deliberative and considered regulator. But it raises a clear question about whether our current regulatory framework can keep pace with that challenge.

So, even as AI ‘leaps forward’ at a rate never seen before, questions around transparency and explainability become paramount if we’re to protect consumers from harm – intended or not. Let me consider several questions and risks around the use of AI.

One question may be: will the ‘rapid progress’ of AI carry along with it the vulnerable man or woman struggling to pay their bills in the midst of a cost-of-living crisis, whose credit score is at the whim of AI-driven credit scoring models that may be inadvertently biased?

It isn’t fanciful to imagine that credit providers using AI systems to identify ‘better’ credit risks could unfairly discriminate against those vulnerable consumers. And with ‘opaque’ AI systems, the mechanisms by which that discrimination occurs could be difficult to detect. Even if the current laws are sufficient to punish bad conduct, their ability to prevent the harm might not be.

In such a case, will that struggling person have recourse to appeal? Will they even know that AI was being used? And if they do, who’s to blame? Is it the developers? The company? And how would the company even go about determining whether the decision was driven by algorithmic bias, rather than a legitimate calculation drawing on broader data sets than human modelling would use? Dario Amodei, CEO of the AI company Anthropic, admits freely that “we, humanity, do not know how to understand what’s going on inside these [AI] models.”[9] So if even the experts can’t explain how a particular system works – and it seems this is often the case – how can we justify using it? How can we be sure that vulnerable consumers are part of that great leap forward?

Or let’s consider the use of AI in fraud detection and prevention, with algorithms analysing patterns and anomalies in transactions to detect potentially fraudulent activity in real time. What happens to the customer who’s debanked because an algorithm says so? What happens when they’re denied a mortgage because an algorithm decides they should be? When that person ends up paying a higher insurance premium, will they know why – or even that they’re paying a higher premium? Will the provider?

And what if a provider lacks adequate governance or supervision of an AI investment manager? When, as a system, it learns to manipulate the market by hitting stop losses, causing market drops and volatility… when there’s a lack of detection systems… yes, our regulations around responsible outsourcing may apply – but have they prevented the harm? Or a provider might use the AI system to carry out some other agenda – say, supporting only related-party products, or giving preference to certain share offerings based on historic data. The point is, there’s a need for transparency and oversight to prevent unfair practices – accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure.

Does it prevent blind reliance on AI risk models, without human oversight, leading to risks being underestimated? Does it prevent a failure to consider emerging risks the models never encountered during training?

In addition to these questions I’ve just posed, the Australian Signals Directorate last week outlined several further challenges around AI use, including:

  1. Data poisoning;
  2. Input manipulation;
  3. AI ‘hallucinations’; and
  4. Privacy and intellectual property concerns.[10]

The first three in particular present a governance challenge to any entity using AI: not to become over-reliant on a model that can’t be understood, examined, and explained.

In response to these various challenges, some may suggest solutions such as red-teaming, or ‘AI constitutions’ – the suggestion that AI can be better understood if it has an in-built constitution it must follow. But even these have been shown to be vulnerable: one team of researchers broke through the control measures of several AI models simply by adding random characters to the end of their requests.[11] Another possibility, echoing the EU approach, might be to require an ‘AI risk assessment’ before implementing AI in any given case.[12] But even here, questions like those I’ve already asked need to be considered to ensure the risk assessment is actually effective in preventing harm.

My point is, these questions of transparency, explainability, and rapidity deserve careful attention. They can’t be answered quickly or off-hand. But they must be addressed if we’re to ensure the advancement of AI means an advancement for all. And as far as financial markets and services are concerned, it’s clear there’s a way to go in answering them.

Conclusion

To sum up: AI, as everyone here knows, is a rapidly and constantly evolving space. But ASIC’s focus is – and will always be – on two things:

  1. The safety and integrity of the financial system;
  2. Positive outcomes for consumers and investors.

AI may be able to help us achieve these ends; it can ‘create new jobs, power new industries, boost productivity and benefit consumers’.[13] But, as yet, no clear consensus has emerged on how best to regulate it. Business practices that deliberately or accidentally mislead and deceive consumers are nothing new – and we have a long history of dealing with them. But this risk is exacerbated by the availability of vast consumer data sets and the use of tools such as AI and machine learning, which allow for quick iteration and micro-targeting. As new technologies are adopted, monitoring consumer outcomes is crucial.

For now, existing obligations around good governance and the provision of financial services don’t change with new technology. That means all participants in the financial system have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies.

Bridging the governance gap means strengthening our current regulatory framework where it’s sound, and shoring it up where it needs further development. But above all, it means asking the right questions. And one question we should be asking ourselves again and again is this: “is this enough?”

[1] Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response, p. 5

[2] Lucio Ribeiro, “Decoding 2024: Experts unravel AI’s next big phase”, Forbes Australia, https://www.forbes.com.au/news/innovation/decoding-2024-experts-unravel-ais-next-big-phase/

[3] Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response, p. 15

[4] Australian Securities and Investments Commission v RI Advice Group Pty Ltd

[5] ASIC’s recent action against IAG. ASIC states that IAG subsidiaries between January 2017 and December renewed over 1 million home insurance policies for brands including SGIO, SGIC and RACV.

[6] “Robots Learn, Chatbots Visualize: How 2024 Will Be A.I.’s ‘Leap Forward’”, The New York Times, https://www.nytimes.com/2024/01/08/technology/ai-robots-chatbots-2024.html

[7] Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response, p. 4

[8] “Uber Australia investigation finalised”, Fair Work Ombudsman, https://www.fairwork.gov.au/newsroom/media-releases/2019-media-releases/june-2019/20190607-uber-media-release

[9] Madhumita Murgia, “Broken ‘guardrails’ for AI systems lead to push for new safety measures,” Financial Times, 7 October 2023, https://www.ft.com/content/f23e59a2-4cad-43a5-aed9-3aea6426d0f2

[10] Australian Signals Directorate, Engaging with Artificial Intelligence (AI), 24 January 2024 (accessed 25 January 2024), https://www.cyber.gov.au/resources-business-and-government/governance-and-user-education/governance/engaging-with-artificial-intelligence

[11] Madhumita Murgia, “Broken ‘guardrails’”, op. cit.

[12] Cf. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

[13] Safe and Responsible AI, op. cit., p. 18