
ASIC flags concerns around AI governance risks for licensees

The proliferation of artificial intelligence in financial services opens the sector up to possible issues if licensees don’t keep a handle on their governance practices, according to ASIC.

In its Report 798, Beware the gap: Governance arrangements in the face of AI innovation, the Australian Securities and Investments Commission (ASIC) warned financial services and credit licensees that, despite current AI usage being “relatively cautious”, there is a risk that their governance approach to the technology lags behind its adoption.

As part of its review, ASIC looked at AI use across 23 licensees in the financial advice, retail banking, credit, and general and life insurance sectors, where AI interacted with or impacted consumers.

ASIC chair Joe Longo said making sure governance frameworks are updated for the planned use of AI is crucial to licensees meeting future challenges posed by the technology.

“Our review shows AI use by the licensees has to date focused predominantly on supporting human decisions and improving efficiencies. However, the volume of AI use is accelerating rapidly, with around 60 per cent of licensees intending to ramp up AI usage, which could change the way AI impacts consumers,” Longo said.

According to the report, nearly half of licensees did not have policies in place that considered consumer fairness or bias, and even fewer had policies governing the disclosure of AI use to consumers.

This was in contrast to the “rapid acceleration in the volume of AI use cases”, with many also shifting towards “more complex and opaque types of AI”, such as generative AI.


“It is clear that work needs to be done – and quickly – to ensure governance is adequate for the potential surge in consumer-facing AI,” Longo said, noting that the increase in AI usage holds the potential for significant benefits.

“When it comes to balancing innovation with the responsible, safe and ethical use of AI, there is the potential for a governance gap – one that risks widening if AI adoption outpaces governance in response to competitive pressures,” he said.

“Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment and data security and privacy failures, all of which have the potential to cause consumer harm and damage to market confidence.”

The regulator stressed that licensees of all types need to consider their existing obligations and duties when deploying AI, rather than waiting for AI-specific laws and regulations to be introduced.

“Existing consumer protection provisions, director duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures in place to deal with the use of new technologies,” Longo said.

“This includes proper and ongoing due diligence to mitigate third-party AI supplier risk.

“We want to see licensees harness the potential for AI in a safe and responsible manner – one that benefits consumers and financial markets. This can only happen if adequate governance arrangements are in place before AI is deployed.”

Reliance on third parties, ASIC said in the report, allowed licensees, especially smaller ones, to overcome limitations in resourcing and technical skills; however, “improperly managed third-party models can introduce risks, such as a lack of transparency and control, and security and privacy concerns”.

The regulator said that “understanding and responding to the use of AI by financial firms is a key focus for ASIC”, adding that it would continue to monitor how licensees use AI.

“Where there is misconduct, ASIC will take enforcement action if appropriate and where necessary,” ASIC said.