A group of Australian academics found out the hard way that AI chatbots don’t always tell the truth and need to be fact-checked.
The group of accounting specialists made their submission to an Australian parliamentary inquiry into the professional accountability and ethics of the consultancy industry.
The academics were lobbying for the big four auditing firms, Deloitte, KPMG, Ernst & Young, and PricewaterhouseCoopers, to be split up.
To justify their argument, they needed examples of how these firms had engaged in misconduct, and one of the team thought it would be a good idea to ask Google's Bard chatbot for some case studies.
Like many other LLMs, Bard is so keen to oblige that if it can't find an answer, it will hallucinate and make one up.
The academics happily added Bard’s hallucinations into their submission without checking their veracity.
Their submission claimed that several partners at KPMG had resigned after the firm was complicit in the “KPMG 7-Eleven wage theft scandal”.
They also claimed that Deloitte was being sued by liquidators of Probuild, a failed building company, as a result of improper auditing. They further claimed that Deloitte falsified the accounts of a company called Patisserie Valerie during an audit.
These claims, and several others, were all false. When the case studies were presented as evidence, the auditing firms were quick to point this out.
The false submissions are covered by parliamentary privilege, so the auditing firms can't pursue defamation cases. They did get an awkward apology, though.
Oops, sorry
Professor James Guthrie, who had only been using Bard for a week when he decided to employ AI in his work, took responsibility for the faux pas.
“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologizes to the committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” said Guthrie in his letter to the Senate.
Assuring parliament that he had learned his lesson, he said, “I now realize that AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.”
Meanwhile, Bard probably still thinks it did a good job. Since it has access to real-time internet data, it might realize its mistake if it ever reads this article.