Confidential ShotSpotter document reveals the key human role in gunshot detection

CHICAGO — In more than 140 cities across the United States, ShotSpotter’s AI algorithm and intricate network of microphones evaluate hundreds of thousands of sounds a year to determine whether they are gunfire, generating data now used in criminal cases across the country.

But a confidential ShotSpotter document obtained by The Associated Press shows what the company doesn’t always tout about its “precision policing system”: that human employees can quickly overrule and reverse the algorithm’s determinations, and are given broad discretion to decide whether a sound is a gunshot, fireworks, thunder or something else.

Such reversals happen 10% of the time, by the company’s own 2021 account, which experts say could introduce subjectivity into increasingly consequential decisions and conflicts with one of the reasons AI is used in law enforcement tools in the first place: to lessen the role of all-too-fallible humans.

“I’ve listened to a lot of gunshot recordings, and it’s not easy to do,” said Robert Maher, a leading national authority on gunshot detection at Montana State University, who reviewed the ShotSpotter document. “Sometimes it’s obviously a shot. Sometimes it’s just ping, ping, ping. … and you can convince yourself it’s a shot.”

This story is part of the Associated Press’ ongoing “Tracked” series, which explores the power and impact of algorithm-driven decisions on people’s daily lives.

Labeled “WARNING: CONFIDENTIAL,” the 19-page operations document explains how ShotSpotter review center staff should listen to recordings and evaluate the algorithm’s finding of a likely gunshot based on a number of factors that require judgment calls, including whether the sound has the cadence of gunfire, whether the audio pattern looks like a “Christmas tree turned on its side” and whether there is “100% certainty of gunfire in the reviewer’s mind.”
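To make that checklist concrete, here is a minimal, purely illustrative Python sketch of how such judgment calls might combine into a single definitive label. The field names, the combining rule and the labels are hypothetical; the document describes the factors but not how reviewers weigh them.

```python
# Hypothetical illustration only: models the reviewer checklist the document
# describes as a simple rule. The real criteria, their weighting and the
# process are confidential; nothing here is ShotSpotter's actual logic.
from dataclasses import dataclass


@dataclass
class ReviewCues:
    gunfire_cadence: bool        # does the sound have the rhythm of gunfire?
    sideways_tree_pattern: bool  # does the waveform look like a Christmas tree on its side?
    reviewer_certain: bool       # "100% certainty of gunfire" in the reviewer's mind


def review(cues: ReviewCues) -> str:
    """Return a definitive label with no 'possible gunfire' option,
    mirroring the decisive-classification policy the document describes."""
    if cues.reviewer_certain and (cues.gunfire_cadence or cues.sideways_tree_pattern):
        return "gunfire"
    return "not gunfire"


print(review(ReviewCues(True, True, True)))    # -> gunfire
print(review(ReviewCues(True, False, False)))  # -> not gunfire
```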

ShotSpotter said in a statement to the AP that the human role is a positive check on the algorithm and that the “plain-language” document reflects the high standards of accuracy its reviewers must meet.

“Our data, based on reviews of millions of incidents, proves that human review adds value, accuracy and consistency to the review process that our customers — and many gunshot victims — depend on,” said Tom Chittum, the company’s vice president of analytics and forensic services.

Chittum added that the company’s expert witnesses have testified in 250 court cases in 22 states, and that its “97% aggregate accuracy rate for real-time detections across all customers” has been verified by an analytics firm the company commissioned.

Another section of the document underscores ShotSpotter’s longstanding emphasis on speed and decisiveness, and its pledge to classify sounds in less than a minute and alert local police and 911 dispatchers so they can send officers to the scene.

Titled “Adopting a New York State of Mind,” it refers to the New York Police Department’s request that ShotSpotter stop publishing alerts for sounds as “possible gunfire,” issuing only definitive classifications of gunfire or non-gunfire.

“The end result: It trains the reviewer to be decisive and accurate in their classification and attempts to remove a questionable publication,” the document says.

Experts say such guidance under time pressure could encourage ShotSpotter reviewers to err on the side of classifying a sound as a gunshot, even when some of the evidence falls short, potentially boosting the number of false positives.

“You don’t give your people a lot of time,” said Geoffrey Morrison, a British voice-recognition scientist who specializes in forensics. “And when people are under a lot of pressure, the chances of making mistakes are higher.”

ShotSpotter reports that it issued 291,726 gunfire alerts to clients in 2021. That same year, in comments to the AP attached to a previous story, ShotSpotter said that more than 90% of the time its reviewers agreed with the machine’s classification, but that the company had invested in its team of reviewers “for the 10% of the time they disagree with the machine.” ShotSpotter did not respond to questions about whether that ratio still holds.
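For a sense of scale, the two figures above can be paired in a quick back-of-the-envelope calculation. The article does not say the 10% disagreement rate applies to exactly this base of alerts, so the sketch below is an order-of-magnitude illustration, not a company statistic.

```python
# Rough arithmetic only: pairs the 2021 alert count with the ~10% human-machine
# disagreement rate the company cited. The story does not confirm the 10%
# applies to this exact base, so treat the result as an order of magnitude.
alerts_2021 = 291_726     # gunfire alerts ShotSpotter says it issued in 2021
disagreement_rate = 0.10  # share of reviews where humans overrode the machine

print(f"~{alerts_2021 * disagreement_rate:,.0f} human reversals per year at that scale")
# -> ~29,173 human reversals per year at that scale
```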

The ShotSpotter operations document, which the company argued in court for more than a year was a trade secret, was recently released from a protective order in a Chicago court case in which police and prosecutors used ShotSpotter data as evidence to charge a Chicago grandfather with murder for allegedly shooting a man inside his car in 2020. Michael Williams spent nearly a year in jail before a judge dismissed the case against him due to insufficient evidence.

Evidence presented at Williams’ pretrial hearings showed that ShotSpotter’s algorithm initially classified the noise picked up by its microphones as a firecracker, making that determination with 98% confidence. But a ShotSpotter reviewer who assessed the sound quickly relabeled it as a gunshot.

The Cook County public defender’s office says the operations document was the only material ShotSpotter produced in response to numerous subpoenas seeking its instructions, guidelines or other scientific protocols. The company has long resisted calls to open its operations to independent scientific scrutiny.

California-based ShotSpotter acknowledged to the AP that it has other “comprehensive training and operational materials” but considers them “confidential and trade secrets.”

ShotSpotter installed its first sensors in Redwood City, Calif., in 1996 and for years relied solely on local 911 dispatchers and police to review each potential gunshot until it added its own human reviewers in 2011.

Paul Greene, a ShotSpotter employee who frequently testifies about the system, explained at a 2013 evidentiary hearing that reviewers were needed to address problems with a system that “occasionally gives false positives” because “it doesn’t have an ear to listen.”

“Classification is the most difficult element of the process,” Greene said during the hearing. “Simply because we have no … control over the environment in which the shots are fired.”

Greene added that the company likes to hire former military and police officers familiar with firearms, as well as musicians, because they “tend to have a more developed ear.” Their training includes listening to hundreds of audio samples of gunfire and even visits to shooting ranges to familiarize themselves with the explosive characteristics of firearms.

As cities have weighed the system’s promise against its price tag, which can reach $95,000 per square mile per year, company officials have detailed how its acoustic sensors on utility poles and light poles pick up loud pops, rumbles or bangs and then filter the sounds through an algorithm that automatically determines whether they are gunfire or something else.

But until now, little was known about the next step: how ShotSpotter reviewers in Washington, D.C., and the San Francisco Bay Area decide, around the clock, whether a sound is a gunshot or some other noise.

“Listening to the audio download is very important,” says the document, written by David Valdez, a former police officer and now-retired manager of one of ShotSpotter’s review centers. “Sometimes the audio is so compelling for gunfire that it can override all other features.”

One part of the decision-making process that has changed since the document was written in 2021 is whether reviewers can consider that the algorithm ranked a sound as a likely gunshot with “high confidence.” ShotSpotter said that in June 2022 it stopped displaying the algorithm’s confidence rating to reviewers “to prioritize other elements that correlate more closely with an accurate human score.”
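Taken together, the pipeline the article describes has two stages: the algorithm scores a sound, and a human reviewer issues the definitive label, since June 2022 without seeing the algorithm’s confidence rating. Below is a minimal sketch of that flow under those assumptions; every function name and value is a hypothetical placeholder, not ShotSpotter’s actual interface.

```python
# A deliberately simplified sketch of the pipeline as reported: sensor audio is
# scored by the algorithm, then a human reviewer makes the definitive call.
# All names and values below are hypothetical placeholders, not ShotSpotter's API.

def algorithm_classify(audio: bytes) -> tuple[str, float]:
    """Stand-in for the proprietary classifier: returns (label, confidence)."""
    return "firecracker", 0.98  # e.g., the initial 98%-confidence call in the Williams case


def reviewer_classify(audio: bytes) -> str:
    """Stand-in for the review-center step; per the article, since June 2022
    reviewers no longer see the algorithm's confidence rating."""
    return "gunfire"  # the reviewer's definitive label, which may reverse the machine


def process_sound(audio: bytes) -> str:
    machine_label, confidence = algorithm_classify(audio)  # recorded, but hidden from the reviewer
    final_label = reviewer_classify(audio)                 # the human call is what gets published
    return final_label  # if "gunfire", an alert goes to police and 911 dispatch


print(process_sound(b"\x00"))  # -> gunfire
```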

ShotSpotter CEO Ralph Clark said the system’s machine classification is improved by “real-world human feedback.”

However, a recent study found that people tend to overestimate their ability to recognize sounds.

A 2022 study published in the peer-reviewed journal Forensic Science International examined how well human listeners identify voices compared with voice-recognition software. It found that all of the human listeners performed worse than the voice system alone, and its authors said the results should lead to the removal of human listeners from court cases whenever possible.

“Will that be the case with ShotSpotter? Will the ShotSpotter system with a human reviewer outperform the automated system alone?” asked Morrison, who was one of seven researchers who conducted the study.

“I don’t know. But ShotSpotter should do the validation to demonstrate that.”
