In April 2024, the American police tech firm Axon, which leads the market for police body cameras, released a tool it billed as “revolutionary”: Draft One, an AI-powered software package that would turn body camera footage and audio into intelligible police reports.
Once best known for developing the Taser, Axon has transformed into a $50 billion military and law enforcement tech giant, providing more than 5,000 police departments across the country with a suite of cloud-based products to manage evidence collection and storage. Draft One, the AI tool, connects with the company’s body cameras and evidence storage service to write police reports with little human intervention. At least 21 departments have experimented with the software.
The use of artificial intelligence in generating police reports has been particularly troubling, according to civil rights advocacy groups like the Electronic Frontier Foundation and ACLU, because of generative AI’s propensity towards racial and gender bias, and its tendency to insert inaccuracies into texts—including wholesale inventions known by technologists as “hallucinations.”
Axon says its AI system, a custom variant of OpenAI’s ChatGPT, has been fine-tuned to reduce hallucinations. The company also says it’s introduced other safeguards, including one that deliberately inserts “obvious errors” into draft reports, and another that requires a minimum level of human editing before a report can be marked complete—features designed to ensure that officers are actually reading through and confirming the accuracy of reports rather than rubber-stamping them.
But records obtained by Mother Jones through freedom of information laws almost uniformly show police departments that use the software turning such features off. In practice, police using Axon’s product have reduced or eliminated human oversight, deactivating safeguards meant to prevent AI bias while making it difficult or impossible to audit which reports were generated by AI.
Last year, police in Lafayette, Indiana, were among the first to pilot Axon’s Draft One software. Emails obtained by Mother Jones through a public records request to the Lafayette Police Department—one of the state’s largest—show that soon after releasing Draft One, Axon informed the department about a new feature: by default, reports would include a header or footer acknowledging that they were written by AI. “We made this decision in the spirit of transparency,” an Axon spokesperson wrote to Captain Brian Gossard, a senior police official, on May 28, 2024. “You can easily turn this off.”
The department’s settings page shows that police officials did just that—making it impossible to independently review which of its reports were written by AI. (According to the Electronic Frontier Foundation, Axon’s software also doesn’t make it possible to track which portions of a report were written by AI.)
And although the department had obscured which reports were generated by AI, another set of emails between Gossard and Axon representative Noah Spitzer-Williams suggests that AI-drafted reports were used in plea deals in the state.
“I am not personally aware of any [Draft One] reports going through a live court setting,” Gossard wrote to Spitzer-Williams in July 2024. “However, I can almost guarantee reports have been used in plea deals.”
But in response to a follow-up Mother Jones request to identify AI-generated reports used in plea deals, the department said it was “unable to locate any list of cases.”
“This is a willful choice by the police department—an omission that has downstream consequences,” says Andrew Ferguson, a professor of law at George Washington University and author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. “There are judges, prosecutors, defense lawyers, and defendants relying on this document, assuming it comes from the sworn memory of the police officer.”
The Lafayette Police Department wasn’t alone in deactivating safeguards. Documents obtained from another department piloting the Axon software, in Fort Collins, Colorado, show police similarly disabling footers acknowledging that the reports were produced using AI.
Ferguson calls such decisions an “unjustified risk.”
“Axon has said this is a new technology, it’s still untested. We want to make sure there are no errors,” Ferguson says. “One way to do that is to be transparent with the public about when it’s being used. And turning that warning off is essentially ignoring a real risk you’ve identified.”
Those risks, he warns, include the possibility that AI may cause officers to perjure themselves. “Let’s say the AI hallucinates—which happens on occasion—then there’s essentially a lie put forth to the court that can’t be easily revealed.”
Studies have warned that generative AI tools demonstrate bias against both women and non-white people, in addition to their propensity to hallucinate. A 2024 ACLU white paper echoed that warning, opposing software like Axon’s on the grounds that those biases may exacerbate existing inequalities in policing.
In response to a request for comment, an Axon spokesperson said that the company’s development practices were “grounded in a set of guiding principles in key areas, including AI, to ensure that everything we do serves as a force for good.”
But the company also acknowledges the risks of its AI product in an annual SEC filing.
“Unexpected failures or inaccuracies in AI-driven systems could expose our customers to operational risks, particularly in high-stakes use cases such as law enforcement or public safety,” Axon’s most recent annual disclosure reads. “The development, adoption, integration and use of generative AI technology remains in the early stages and consequently, our AI technology may contain material defects or errors.”
The filing also acknowledges the ACLU’s concerns: “AI algorithms that we use may be flawed or may be (or perceived to be) based on datasets that are biased or insufficient, a risk raised by the ACLU…Biases in AI models could result in discriminatory outcomes, eroding trust among customers and communities we serve.”
Publicly, Axon says it maintains the safety of its AI product by including settings that compel officers to review the information generated by the software. And while other departments did include headers and footers to identify reports written by Draft One, six of the seven police departments that responded to Mother Jones’ public records requests had turned off features that required officers to review AI-generated reports. Only one responding department, in South Jordan, Utah, kept the features requiring officer input turned on. But screenshots of its settings page showed that, while the department required officers to make a minimum number of changes to AI-generated drafts, the software still allowed them to bypass that requirement.
Meanwhile, Draft One is being used to generate police reports on a wide range of alleged crimes, including potential felonies; while the software allows departments to disable its use for certain types of crimes, nearly every police department responding to records requests opted not to use that feature. A spreadsheet obtained from the South Jordan Police Department shows that Axon’s software was used to generate more than 900 reports, covering everything from welfare checks to kidnapping and assault cases, between September 2024 and April of this year.
Another log, obtained from the Fresno Police Department in California, showed the department using the software for more than 3,000 incidents between December 2024 and April 2025. “Our use and evaluation of the product has been going well, and we are taking steps to expand the use of the tool to include additional incident types in the near future in coordination with our District Attorney’s Office,” Fresno police said in response to a request for comment.
In March, Utah’s state legislature passed a bill mandating that police departments disclose any use of artificial intelligence, the first such measure to become law in any state. California’s state Assembly is considering a similar bill, and Seattle’s police watchdog has urged a local ordinance regulating departmental AI use.
The Utah bill’s sponsor, Democratic state Sen. Stephanie Pitcher, is a former criminal defense attorney concerned about AI’s potential to disrupt or derail trials. Most defense attorneys, Pitcher says, won’t even know to ask whether the reports they’re receiving were produced by AI.
“Transparency is so important,” Pitcher says. “If it isn’t clear where this information is coming from—whether it was generated by AI, using cameras and generating reports, or whether it was observations by the officers, it really just complicates things when you need to have a witness in court to answer questions.”
If departments aren’t proactively tracking which reports were written by AI, Pitcher says, that can have downstream consequences for citizens accused of crimes.
“It’s easy to get that information from a police agency, but if you’re going to have to subpoena a tech company directly it has to go through their legal department. I’m sure you can get it but it’s going to take some time,” she adds. “If you’ve got an individual who’s in custody, sometimes they’re in custody until the case resolves. Those types of delays materially attack the individual being held.”
Caroline Sinders, the founder of Convocation Research and Design, a nonpartisan think tank studying cybersecurity and human rights, calls it troubling that police departments are defaulting to turning off features designed to provide accountability and help verify AI-generated content—and that it’s even possible to do so. “Design is deeply political,” Sinders says. “Why have these settings and make them optional when dealing with something as important as case generation?”
Some preliminary studies also show that Axon may have overestimated AI’s ability to save officers time. In Alaska, the Anchorage Police Department said it ended its trial of Draft One because it didn’t produce any significant time savings. A study of officers in New Hampshire found similar results.
But Draft One’s true Achilles heel may be its expense, along with users’ limited enthusiasm. The software can cost departments tens of thousands of dollars annually to operate—and while the company has heavily promoted the software’s supposed ability to make policing more efficient, emails show that officers may not always be on board.
Although the Lafayette Police Department was a significant early adopter of Axon’s software—with Captain Brian Gossard, quoted in the emails above, acting as a spokesperson for the new tool—a September 2024 email from Axon to Gossard acknowledged “adoption challenges.” Less than a quarter of eligible officers, Axon complained, were using the software.
“AI models,” the Axon staffer wrote in a separate email, “are very expensive for us to operate.”