Australia considering mandatory guardrails for “high-risk” AI

January 17, 2024

Australia is considering imposing mandatory guardrails on the development and deployment of AI in “high-risk” settings in response to concerns raised during public consultations.

Minister for Industry and Science Ed Husic said, “Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled.”

The measures the Australian government proposes were published in its interim response to the "Safe and responsible AI in Australia" consultation.

The report stated, “The government will consider mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings. This will help ensure AI systems are safe when harms are difficult or impossible to reverse.”

It acknowledged that there were diverse views on what constituted “high-risk” settings but offered the list adopted in the EU AI Act as examples:

  • certain critical infrastructure (water, gas, electricity)
  • medical devices
  • systems determining access to educational institutions or recruiting people
  • systems used in law enforcement, border control, and administration of justice
  • biometric identification
  • emotion recognition.

In addition, it gave examples such as AI used to predict a person's likelihood of re-offending, assess a person's suitability for a job, or control a self-driving vehicle.

Where an AI failure could cause irreversible damage, the report proposes mandatory laws defining how the technology is developed and deployed.

Some of the proposed guardrails include digital labels or watermarks to identify AI-generated content, "human-in-the-loop" requirements, and even outright bans on AI uses that present unacceptable risks.

Proposed examples of unacceptable applications included behavioral manipulation, social scoring, and real-time widescale facial recognition.

Voluntary regulation for now

It may be some time before the proposed mandatory requirements are drafted into law. Meanwhile, in an interview with ABC News, Husic said, “We want to design voluntary safety standards right away and be able to work on that and be able to get industry to understand what they’re supposed to deliver and how they’re supposed to deliver it.”

In their submissions, OpenAI and Microsoft supported voluntary regulations rather than rushing to implement mandatory ones.

Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales, was critical of this approach and of the lack of concrete steps in the interim report.

Professor Walsh said, “It’s a little little, and it’s a little late. There isn’t much concrete, and much of it is in terms of voluntary agreements with industry, and as we’ve seen in the past, having them mark their own homework is perhaps not the best idea.”

The report stated that “adopting AI and automation could add an additional $170 billion to $600 billion a year to Australia’s GDP by 2030.”

Australia is already a heavily regulated market, and reaching those figures may prove difficult if AI developers are burdened with too much red tape.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
