The New York Police Department (NYPD) has recently initiated a pilot program using AI to analyze police body-worn camera footage.
Meanwhile, the UK is rolling out facial recognition tech to catch shoplifters.
The NYPD’s initiative, in partnership with Chicago-based tech company Truleo, aims to evaluate officers’ on-the-job professionalism.
Truleo’s crowdfunding website states that the company’s mission is to “improve trust in the police.” It points out that, despite the widespread use of body cameras, “trust in the police has not increased… in part because less than 1% of the videos are ever reviewed.”
Truleo’s software processes audio recordings from police body cameras, categorizing an officer’s language under tags such as “insult,” “profanity,” and “threat,” as well as “explanation” and “gratitude.”
The technology then scores an interaction as “professional” or “unprofessional.”
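The tag-then-score pipeline the article describes can be sketched in a few lines. This is a toy illustration only: the tag names come from the article, but the keyword lists and the scoring rule are hypothetical, and Truleo's actual product uses trained speech and language models, not keyword matching.

```python
# Hypothetical keyword lists for illustration -- not Truleo's real taxonomy.
TAG_KEYWORDS = {
    "profanity": {"damn", "hell"},
    "threat": {"arrest you", "use force"},
    "explanation": {"because", "the reason", "pulled you over"},
    "gratitude": {"thank", "appreciate"},
}

PROFESSIONAL_TAGS = {"explanation", "gratitude"}
UNPROFESSIONAL_TAGS = {"insult", "profanity", "threat"}

def tag_utterance(text: str) -> set[str]:
    """Return every tag whose keywords appear in the utterance."""
    lowered = text.lower()
    return {tag for tag, words in TAG_KEYWORDS.items()
            if any(w in lowered for w in words)}

def score_interaction(utterances: list[str]) -> str:
    """Label the whole interaction by comparing tag counts."""
    pro = unpro = 0
    for u in utterances:
        tags = tag_utterance(u)
        pro += len(tags & PROFESSIONAL_TAGS)
        unpro += len(tags & UNPROFESSIONAL_TAGS)
    return "professional" if pro >= unpro else "unprofessional"

transcript = [
    "I pulled you over because your tail light is out.",
    "Thank you for your patience.",
]
print(score_interaction(transcript))  # professional
```

Even in this simplified form, the design question the PBA raises is visible: whatever thresholds and keyword sets feed the final label directly shape which interactions get flagged for supervisor review.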
Truleo’s press release explains, “The technology automatically detects critical events such as use-of-force, pursuits, frisking, and non-compliance incidents and screens for both professional and unprofessional officer language to enable supervisor praise or review.”
Police Benevolent Association (PBA) President Patrick Hendry expressed reservations about officers’ privacy, stating, “The department needs to discuss this pilot program with us before rolling it out because we have serious questions about its impact on our members’ privacy and the fairness of the disciplinary process.”
Truleo’s technology is supposed to separate officers’ actions from the identities of members of the public. It is an interesting, if controversial, reversal: AI deployed to hold law enforcement to account rather than the public.
UK ramps up public deployment of facial recognition
Facial recognition technologies are finding their way into various aspects of day-to-day society.
In the UK, Home Office officials encouraged independent privacy regulators to promote the rollout of facial recognition technology in shops.
Since then, Britain’s Metropolitan Police has launched a pilot to address rampant shoplifting in London by comparing CCTV images with photos of known offenders.
Last month, by analyzing footage from 12 retailers in London, the Metropolitan Police identified 149 known suspects within a matter of days.
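The matching step behind such a pilot can be sketched schematically: each face is reduced to an embedding vector, and a CCTV face is flagged when its similarity to a watchlist embedding crosses a threshold. The three-element vectors, names, and 0.8 threshold below are invented for illustration; production systems use learned embeddings with hundreds of dimensions and carefully tuned thresholds.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return (name, score) pairs for watchlist entries above threshold,
    highest score first. Names and threshold are illustrative only."""
    hits = []
    for name, ref in watchlist.items():
        score = cosine_similarity(probe, ref)
        if score >= threshold:
            hits.append((name, round(score, 3)))
    return sorted(hits, key=lambda h: -h[1])

# Made-up embeddings standing in for faces from a watchlist and a CCTV frame.
watchlist = {
    "suspect_A": [0.9, 0.1, 0.4],
    "suspect_B": [0.1, 0.9, 0.2],
}
cctv_face = [0.85, 0.15, 0.45]
print(match_against_watchlist(cctv_face, watchlist))  # [('suspect_A', 0.996)]
```

The threshold choice is exactly where the bias criticism bites: if embeddings are less reliable for darker-skinned faces, a fixed threshold yields more false matches for some groups than others.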
The technology, initially provided by the private firm Facewatch, has faced criticism for its potential to breach human rights and its alleged bias against darker-skinned individuals.
Mark Johnson of Big Brother Watch expressed privacy concerns, arguing, “Government ministers should strive to protect human rights, not cosy up to private companies whose products pose serious threats to civil liberties in the UK.”
Facewatch founder Simon Gordon defended the technology, saying, “We provide each individual business with a service that will reduce crime in their stores and make their staff safer… All this is doing is using new technology to stop it.”
AI surveillance is now trained on the actions of both the public and law enforcement. While it holds potential for enhanced security and oversight, privacy and civil-rights concerns persist on both sides of the Atlantic.