AI-powered cameras spark privacy concerns as usage grows

June 17, 2024

  • Documents reveal facial recognition pilot programs at eight major UK train stations
  • Facial recognition tech could be used to analyze people's emotions and actions
  • Privacy advocates warn that this infringes on people's civil rights

A new wave of AI-enhanced surveillance is spreading across the US and UK, as private companies and government agencies deploy AI-powered cameras to analyze crowds, detect potential crimes, and even monitor people’s emotional states in public spaces.

In the UK, rail infrastructure body Network Rail recently tested AI cameras in eight train stations, including major hubs like London’s Waterloo and Euston stations, as well as Manchester Piccadilly. 

Documents obtained by civil liberties group Big Brother Watch reveal the cameras aimed to detect trespassing on tracks, overcrowding on platforms, “antisocial behavior” like skateboarding and smoking, and potential bike theft.

Most concerningly, the AI system, powered by Amazon’s Rekognition software, sought to analyze people’s age, gender, and emotions such as happiness, sadness, and anger when they passed virtual “tripwires” near ticket barriers. 

The Network Rail report, some of which is redacted, says there was “one camera at each station (generally the gateline camera), where a snapshot was taken every second whenever people were crossing the tripwire and sent for analysis by AWS Rekognition.”

It then says, “Potentially, the customer emotion metric could be used to measure satisfaction,” and “This data could be utilised to maximise advertising and retail revenue. However, this was hard to quantify as NR Properties were never successfully engaged.”

Amazon Rekognition, Amazon’s computer vision (CV) machine learning platform, can indeed detect emotions. However, this was only a pilot test, and its effectiveness is unclear.
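For context, Rekognition’s DetectFaces API returns per-face attributes, including a list of emotion labels with confidence scores. The sketch below shows how a pipeline like the one described might read those labels from a response; the sample response dict is hypothetical, constructed to match the API’s documented shape, and no AWS call is made here.

```python
# Illustrative sketch of reading emotion labels from an Amazon Rekognition
# DetectFaces-style response (as returned via boto3 with Attributes=['ALL']).
# The sample face below is hypothetical, not real output.

def dominant_emotion(face_detail: dict) -> tuple:
    """Return the (label, confidence) pair with the highest confidence."""
    emotions = face_detail.get("Emotions", [])
    best = max(emotions, key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

# Hypothetical response snippet following Rekognition's documented schema.
sample_face = {
    "AgeRange": {"Low": 25, "High": 35},
    "Gender": {"Value": "Female", "Confidence": 96.2},
    "Emotions": [
        {"Type": "HAPPY", "Confidence": 91.4},
        {"Type": "CALM", "Confidence": 6.1},
        {"Type": "ANGRY", "Confidence": 1.2},
    ],
}

label, confidence = dominant_emotion(sample_face)
print(label, round(confidence, 1))  # HAPPY 91.4
```

Note that the API reports a confidence that a face *appears* to express an emotion, not that the person actually feels it, which is one reason critics question the metric’s value.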

The report says that when using cameras to count people crossing railway gates, “accuracy across gate lines was uniformly poor, averaging approximately 50% to 60% accuracy compared to manual counting,” but this is expected to improve. 

The use of facial recognition technology by law enforcement has also raised concerns. Not long ago, London’s Metropolitan Police used live facial recognition cameras to identify and arrest 17 individuals in the city’s Croydon and Tooting areas. 

The technology compares live camera feeds against a watchlist of persons with outstanding warrants as part of “precision policing.”
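The Met’s actual matching system is proprietary, but live facial recognition of this kind is commonly described as comparing a face embedding from the camera feed against embeddings of watchlist subjects, alerting when similarity clears a threshold. A minimal sketch of that idea, with hypothetical 4-dimensional embeddings (real systems use vectors of 128+ dimensions):

```python
# Generic sketch of watchlist matching via cosine similarity of face
# embeddings. All names, vectors, and the threshold are illustrative.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_watchlist(probe, watchlist, threshold=0.6):
    """Return (name, score) of the best match above threshold, else None."""
    best_name, best_score = None, -1.0
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

watchlist = {
    "subject_a": [0.9, 0.1, 0.0, 0.4],
    "subject_b": [0.1, 0.8, 0.5, 0.2],
}
probe = [0.85, 0.15, 0.05, 0.38]  # embedding from a live camera frame
result = match_watchlist(probe, watchlist)
print(result[0])  # subject_a
```

The threshold is the key policy lever: lowering it catches more genuine matches but also generates more false alerts against innocent passersby.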

In February, the Met used the system to make 42 arrests, though it’s unclear how many led to formal charges. 

“The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step,” commented Jake Hurfurt, head of research at Big Brother Watch.

Your emotions on a database

Critics have vehemently argued that facial recognition threatens civil liberties. 

UK Members of Parliament urged police to reconsider how they deploy the technology after suggestions that forces could access a database of 45 million passport photos to better train these surveillance models. 

Experts also question facial recognition’s accuracy and legal basis, with Big Brother Watch arguing that the majority (85%+) of UK police facial recognition matches are misidentifications. 
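High misidentification rates like this are partly a base-rate effect: when almost nobody scanned is actually on a watchlist, even a system with a low false-positive rate produces mostly false alerts. The arithmetic below uses purely hypothetical numbers to illustrate the mechanism, not actual figures for any deployed system.

```python
# Base-rate arithmetic (all numbers hypothetical): why most alerts can be
# misidentifications even when per-face error rates look small.

def false_discovery_rate(n_scanned, n_on_watchlist, tpr, fpr):
    """Fraction of alerts that are false, given true/false positive rates."""
    true_alerts = n_on_watchlist * tpr
    false_alerts = (n_scanned - n_on_watchlist) * fpr
    return false_alerts / (true_alerts + false_alerts)

# 100,000 faces scanned, 10 genuinely on the watchlist,
# 90% true-positive rate, 0.1% false-positive rate.
fdr = false_discovery_rate(100_000, 10, 0.90, 0.001)
print(f"{fdr:.0%} of alerts are misidentifications")  # 92%
```

Under these assumptions roughly nine in ten alerts would point at the wrong person, which is why critics focus on match accuracy in the field rather than laboratory benchmarks.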

Met Police officials attempted to allay privacy fears, stating that non-matching images are rapidly deleted and that the facial recognition system has been independently audited. 

However, such reassurances ring hollow when misuse of these AI systems genuinely impacts people’s lives. Predictive policing programs in the US have also generally failed to achieve their objectives while causing collateral damage in the form of police harassment and wrongful imprisonments. 

Concerns about bias and inaccuracy in facial recognition systems, especially for people of color, have been a major point of contention. 

Studies have shown the technology can be significantly less accurate for darker-skinned faces, particularly black women.

Policymakers will need to grapple with difficult questions about these powerful tools’ transparency, accountability, and regulation. 

Robust public debate and clear legal frameworks will be critical to ensuring that the benefits of AI in public safety and security are not outweighed by the risks to individual rights and democratic values. 

As technology races ahead, the time for that reckoning may be short.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
