Police Are Increasingly Using Generative AI to Write Reports Despite Serious Risks

Police departments across the United States are rapidly adopting artificial intelligence tools to write their reports, raising serious concerns about accuracy, transparency, and potential miscarriages of justice.

The technology, which converts body camera audio into written reports, has spread through law enforcement agencies despite warnings from prosecutors and civil liberties advocates. The most widely used system is Axon’s Draft One, which benefits from the company’s dominant position as the largest provider of body-worn cameras to police departments nationwide.

“We do not fear advances in technology, but we do have legitimate concerns about some of the products on the market now,” wrote Chief Deputy Prosecutor Daniel J. Clark of the King County Prosecuting Attorney’s Office in Washington state. His office, which handles all prosecutions in the Seattle area, has instructed police departments not to use AI for writing reports.

The concerns center on fundamental questions about reliability and accountability in the criminal justice system. Police reports serve as foundational documents that prosecutors use to build cases, district attorneys rely on for charging decisions, and defense lawyers examine when cross-examining arresting officers. Any errors or distortions in these reports can have profound consequences for people’s freedom.

Clark acknowledged a particularly troubling aspect of the technology: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

The situation grows more concerning when examining how these systems handle accountability. According to the Electronic Frontier Foundation’s investigation, Axon designed Draft One in ways that actively prevent transparency. When officers export their finished reports, the system deletes the original AI-generated draft, erasing any record of which portions came from the computer and which from the officer.

During a roundtable discussion about Draft One, an Axon senior principal product manager for generative AI explained the design choice when asked whether it is possible to determine, after the fact, which portions of a report were suggested by the AI and which were edited by the officer.

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices,” the manager said. “So basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party records management system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

This lack of documentation creates a loophole where officers caught contradicting their own reports on the witness stand could potentially blame the AI for the discrepancy, with no way to verify or disprove their claim.

The opacity also makes it nearly impossible for the public to determine whether their local police are using AI to generate reports, and even harder to audit those reports through public records requests.

Some states have begun responding to these concerns with legislation. Utah passed a law requiring police reports created wholly or partially by generative AI to include a disclaimer stating they contain AI-generated content. Officers must also certify they checked the report for accuracy.

California went further with legislation requiring police to disclose on each report whether AI was used to author it fully or partially. The law also prohibits vendors from distributing or sharing information police agencies provide to the AI system. Perhaps most significantly, it mandates that departments retain the first draft of reports, allowing judges, defense attorneys, and auditors to see which portions were written by officers and which by computers.

The King County prosecutor’s memo expressed hope that the technology might improve: “AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”

However, digital rights advocates remain skeptical that generative AI will ever be capable of producing reliable police reports, even with further development. The fundamental concern remains that documents carrying such weight in determining people's liberty should not be outsourced to machines that produce unpredictable results and operate without meaningful oversight.

As more states consider regulations or bans on AI-written police reports, the debate highlights broader tensions about deploying emerging technologies in high-stakes contexts before they’re fully understood or properly regulated. Companies market these tools as time-saving solutions for overworked officers, but prosecutors, defense attorneys, and civil liberties organizations argue that efficiency cannot come at the cost of justice system integrity.