
RIAA launches toolkit to understand AI risks

Advisers will have more help in understanding the risks presented by artificial intelligence thanks to a toolkit developed by the Responsible Investment Association Australasia.

The Responsible Investment Association Australasia (RIAA) has launched a toolkit to help advisers address the risks presented by artificial intelligence (AI).

Its Artificial Intelligence and Human Rights Investor Toolkit was developed by the organisation’s Human Rights Working Group.

The toolkit outlines the issues, provides case studies, sets out methodologies for understanding risks, and details strategies and guidance for investor engagement on the matter. It will also help firms understand how to implement and deploy AI ethically and responsibly.

Its development came about after attendees at RIAA’s national conference last year raised concerns about digital technology issues such as privacy, data protection, online safety, and political participation, and called for a resource to help them understand these issues.

RIAA said the need for the toolkit was also prompted by the “skyrocketing” use of AI across industries – including financial services – and the risks presented by algorithms, privacy breaches, and reputational damage.

Estelle Parker, co-chief executive officer at RIAA, said: “The potential benefits of AI are immense, but investors are increasingly aware of the risks posed by AI, especially when it is inadequately designed, inappropriately used or maliciously deployed. Not only does AI pose risks to individuals and their human rights, but there are also ethical and financial risks to companies and investors as this technology evolves.


“From powering algorithms through to enabling deepfake pornography, the vast potential of AI is linked to a wide range of human rights risks. As investors, we need to be aware of how these are emerging and what we can do to address them.

“Through active corporate engagement, investors can communicate concerns and priorities to a company’s leadership and encourage better business practices. In turn, this helps to protect the long-term returns of clients and beneficiaries. Once investors understand their exposure to adverse human rights impacts and flow-on risks, they are better-placed to prioritise engagement based on their portfolio’s most salient human rights issues.”

Liza McDonald, head of responsible investment at Aware Super and co-chair of the RIAA subgroup, said: “As one of Australia’s largest profit-for-member super funds and an early adopter in the pension industry, Aware Super uses artificial intelligence prudently to help in designing more efficient services for our 1.1 million members.

“We recognise the risks AI’s rapid evolution has created. Our members trust us to do all that is practical to safeguard their super and we continually invest in robust technical and human security measures, in collaboration with the appropriate regulatory authorities, to help mitigate these threats.”

RIAA has over 500 members representing US$29 trillion in assets under management.

Earlier this year, Australian Securities and Investments Commission (ASIC) chair Joe Longo said the regulator is conducting a review into the use of AI by financial advice firms in order to prevent AI harm to consumers.

He said ASIC is already conducting a review into the use of AI in the banking, credit, insurance and financial advice sectors, testing what risks to consumers are being identified by licensees and how licensees are mitigating them.

Challenges to AI use include data poisoning, input manipulation, AI “hallucinations”, and privacy and intellectual property concerns. One option to mitigate these risks, Longo suggested, would be to conduct an AI risk assessment before implementing AI, though he acknowledged that questions would need to be asked about its effectiveness.