Judge blasts AI misuse in high-profile court case

A severe rebuke from the Supreme Court of Victoria in Australia has highlighted the growing dangers of artificial intelligence misuse in legal proceedings, after defense lawyers submitted court documents riddled with AI-generated fabrications in a high-profile murder case.

The extraordinary blunder forced Justice James Elliott to delay his ruling by 24 hours when he discovered that the submissions contained fake legal citations, nonexistent case law, and fabricated parliamentary quotes, all produced by artificial intelligence without proper human oversight.

Defense lawyer Rishi Nathwani KC was compelled to make a public apology to the court, accepting “full responsibility” for the AI-related errors that disrupted proceedings in the case of a 16-year-old accused of murder.

“We are deeply sorry and embarrassed for what occurred,” Nathwani told the court, as Justice Elliott expressed his dismay at the unprecedented situation.

The judge did not mince words in his criticism of the legal team’s reliance on unvetted AI-generated content. “At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” Elliott stated on August 14, revealing that even the revised submissions still contained fictional legislation created by AI.

“Use of AI without careful oversight of counsel would seriously undermine this court’s ability to deliver justice,” the judge warned, underscoring the fundamental threat such practices pose to the integrity of legal proceedings.

The case involved serious allegations against a teenager who prosecutors claimed had conspired to kill a 41-year-old woman in Abbotsford in April 2023, allegedly to steal her vehicle and fund what they described as plans for an “anti-communist army.” The defendant, who cannot be identified due to legal restrictions, was ultimately found not guilty by reason of mental impairment, with evidence showing he suffered from untreated schizophrenia and grandiose delusions at the time of the offense.

Court documents revealed that both Nathwani and junior barrister Amelia Beech had failed to properly review their AI-generated submissions before filing them. The artificial intelligence system had fabricated multiple elements of their legal arguments, including nonexistent case judgments, misrepresented parliamentary speeches, and references to laws that were never actually passed by legislators.

The problem was compounded when prosecutors, using the defense submissions as a foundation for their own arguments, also failed to verify the accuracy of the information. This created a cascade effect where both sides presented fundamentally flawed legal arguments based on fictional AI-generated content.

The incident represents a troubling addition to an expanding catalog of artificial intelligence-related failures in courtrooms worldwide.

Legal experts are increasingly concerned about the uncritical adoption of AI tools in legal practice, particularly when lawyers fail to implement adequate safeguards and verification processes. Artificial intelligence, while potentially useful, cannot replace the careful legal research and analysis that forms the backbone of effective advocacy.

The 16-year-old defendant will remain under supervision in a youth justice facility following the court’s determination that his mental health condition at the time of the incident warranted treatment rather than punishment.