An increasing number of law enforcement departments are reportedly turning to artificial intelligence programs to monitor officers’ interactions with the public. According to multiple sources, police departments are specifically enlisting Truleo, a Chicago-based company that applies AI natural language processing to transcripts pulled from already controversial body camera recordings. These partnerships raise concerns regarding data privacy and surveillance, as well as the efficacy and bias issues that come with AI automation.
Founded in 2019 through a partnership with FBI National Academy Associates, Inc., Truleo now possesses a growing client list that already includes departments in California, Alabama, Pennsylvania, and Florida. Seattle’s police department just re-upped on a two-year contract with the company. Police in Aurora, Colorado—currently under a state attorney general consent decree regarding racial bias and excessive use of force—are also in line for the software, which reportedly costs roughly $50 per officer, per month.
Truleo’s website says it “leverages” proprietary natural language processing (NLP) software to analyze, flag, and categorize transcripts of police officers’ interactions with citizens in the hopes of improving professionalism and efficacy. Transcript logs are classified based on certain parameters and presented to customers via detailed reports to use as they deem appropriate. For example, Aurora’s police chief, Art Acevedo, said in a separate interview posted on Truleo’s website that the service can “identify patterns of conduct early on—to provide counseling and training, and the opportunity to intervene [in unprofessional behavior] far earlier than [they’ve] traditionally been able to.”
Speaking to PopSci over the phone, Anthony Tassone, Truleo’s co-founder and CEO, stressed that Truleo’s software “relies on computers’ GPU” and is only installed within a police department’s cloud environment. “We don’t have logins or access to that information,” he says. Truleo’s sole intent, he says, is to provide textual analysis tools for police departments to analyze and assess their officers.
The company website offers example transcripts with AI-determined descriptions such as “formality,” “explanation,” “directed profanity,” and “threat.” The software’s language detection also appears to identify events such as pursuits, arrests, or requests for medical attention. Examples of the program’s other classifications include “May I please see your license and registration?” (good) and “If you move from here I will break your legs” (bad).
When asked about civilians’ rights to opt-out of this new form of employee development, however, Tassone cautions he would only be “speculating or guessing” regarding their options.
“I mean, I’m not a lawyer,” stresses Tassone when asked about civilians’ rights regarding opt-outs. “These questions are more for district attorneys, maybe police union attorneys. Once this information is captured on body camera data, you’re asking the question of really, ‘Who does it belong to?’”
“Can civilians call [local departments] and ask to be removed? I don’t know,” he adds.
PopSci reached out to Alameda and Aurora law enforcement representatives for comment, and will update this post accordingly.
Michael Zimmer, associate professor and vice-chair of Marquette University’s Department of Computer Science, as well as the Director of the Center for Data, Ethics, and Society, urges caution in using the tech in an email to PopSci.
“While I recognize the good intentions of this application of AI to bodycam footage… I fear this could be fraught with bias in how such algorithms have been modeled and trained,” he says.
Zimmer questions exactly how “good” versus “problematic” interactions are defined, as well as who defines them. Given the prevalence of stressful, if not confrontational, civilian interactions with police, Zimmer takes issue with AI determining problematic officer behavior “based solely on bodycam audio interactions,” calling it “yet another case of the normalization of ubiquitous surveillance.”
Truleo’s website states any analyzed audio is first isolated from uploaded body cam footage through an end-to-end encrypted, Criminal Justice Information Services (CJIS) compliant data transfer process. Established by the FBI in 1992, CJIS compliance guidelines are meant to ensure governmental law enforcement and vendors like Truleo protect individuals’ civil liberties, such as those concerning privacy and safety, while storing and processing their digital data. It’s important to note, however, that “compliance” is not a “certification.” CJIS compliance is assessed solely by a vendor like Truleo in conjunction with its law enforcement agency clients; there is no centralized authorizing entity that awards any kind of legally binding certification.
Regardless, Tassone explains that the very nature of Truleo’s product bars its employees from ever accessing confidential information. “After we process [bodycam] audio, there is no derivative of data. No court can compel us to give anything, because we don’t keep anything,” says Tassone. “It’s digital exhaust—it’s ‘computer memory,’ and it’s gone.”
Truleo’s technology also only analyzes bodycam data it is voluntarily offered—what is processed remains at the sole discretion of police chiefs, sergeants, and other Truleo customers. But as Axios notes, the vast majority of body cam footage goes unreviewed unless there’s a civilian complaint or external public pressure, as was the case in the death of Tyre Nichols. Even then, footage can remain difficult to acquire—see the years-long struggle surrounding Joseph Pettaway’s death in Montgomery, Alabama.
Meanwhile, it remains unclear what, if any, recourse is available to civilians uncomfortable at the thought of their interactions with authorities being transcribed for AI textual analysis. Tassone tells PopSci he has no problem if a handful of people request their data be excluded from local departments’ projects, as it likely won’t affect Truleo’s “overall anonymous aggregate scores.”
“We’re looking at thousands of interactions of an officer over a one year period of time,” he offers as an example. “So if one civilian [doesn’t] want their data analyzed to decide whether or not they were compliant, or whether they were upset or not,” he pauses. “Again, it really comes down to: The AI says, ‘Was this civilian compliant during the call?’ ‘Yes’ or ‘No.’ ‘Was this civilian upset?’ ‘Yes’ or ‘No.’ That’s it.”