Global accounting and consulting firm Deloitte is refunding part of an AU$440,000 taxpayer-funded contract after its Australian division admitted that an official government report contained fake citations and AI-generated content. The embarrassing revelation marks the latest instance of a major firm suffering reputational damage for the careless use of artificial intelligence tools.
According to Ars Technica, Deloitte Australia used Microsoft’s Azure OpenAI GPT-4o model to produce sections of its “Targeted Compliance Framework Assurance Review” for the Department of Employment and Workplace Relations (DEWR). The report, released in August, was intended to assess government compliance processes but was quickly exposed as containing “hallucinated” references and fabricated quotes.
Chris Rudge, Deputy Director of Health Law at the University of Sydney, discovered multiple citations that didn’t exist, including several falsely attributed to Professor Lisa Burton Crawford of the university’s law school. Crawford raised concerns about being linked to nonexistent research and demanded an explanation.
After the scandal broke, Deloitte issued an updated version of the report, admitting to “a small number of corrections to references and footnotes.” Fourteen of the original 141 citations were removed, including a fabricated quote attributed to Federal Court of Australia justice Jennifer Davies, whose surname was misspelled as “Davis” in the original draft. The revised report acknowledged the use of “a generative AI large language model (Azure OpenAI GPT-4o) tool chain” in the project’s analytical process.
Deloitte confirmed it would repay the final installment of the contract, though it did not disclose the amount of the refund. DEWR stated that the report’s overall recommendations remain unchanged, but critics were unconvinced.
Rudge slammed Deloitte’s reliance on AI, saying the report’s findings “cannot be trusted” when built on a “flawed, undisclosed, and non-expert methodology.” He argued that the episode raises “serious credibility concerns” about corporate overreliance on generative AI for technical or legal analysis.
This incident follows several other AI-related blunders in the professional world, particularly within the legal sector. In one high-profile case, a Morgan & Morgan attorney cited eight nonexistent court cases generated by ChatGPT in a lawsuit against Walmart. The firm’s leadership later described the episode as “nauseatingly frightening” and disciplined the attorney involved.
The Deloitte debacle underscores a growing global concern: corporations are increasingly turning to AI tools for efficiency, only to discover that such shortcuts can erode public trust, misinform governments, and lead to costly reputational fallout.