Australian academics apologize for false AI-generated claims

November 4, 2023

A group of Australian academics found out the hard way that AI chatbots don’t always tell the truth and need to be fact-checked.

The group of accounting specialists made their submission to an Australian parliamentary inquiry into the professional accountability and ethics of the consultancy industry.

The academics were lobbying for the Big Four accounting firms, Deloitte, KPMG, Ernst & Young, and PricewaterhouseCoopers, to be split up.

To justify their argument, they needed examples of how these firms had engaged in misconduct, and one of the team thought it would be a good idea to ask Google's Bard chatbot for some case studies.

Like many other LLMs, Bard is so keen to oblige that if it can't find an answer for you, it will hallucinate one.

The academics happily added Bard's hallucinations to their submission without checking their veracity.

Their submission claimed that several partners at KPMG had resigned after the firm was complicit in the “KPMG 7-Eleven wage theft scandal”.

They also claimed that Deloitte was being sued by liquidators of Probuild, a failed building company, as a result of improper auditing. They further claimed that Deloitte falsified the accounts of a company called Patisserie Valerie during an audit.

These, and several other claims, were all false. When the case studies were presented as evidence, the auditing firms were quick to point this out.

The false submissions are covered by parliamentary privilege, so the auditing firms can't pursue defamation cases. They did, however, get an awkward apology.

Oops, sorry

Professor James Guthrie, who had only been using Bard for a week when he decided to employ AI in his work, took responsibility for the faux pas.

“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologizes to the committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” said Guthrie in his letter to the Senate.

Assuring parliament that he had learned his lesson he said, “I now realize that AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.”

Meanwhile, Bard probably still thinks it did a good job. It has access to real-time internet data, so if it reads this article it might eventually realize its mistake.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
