London Underground deploys AI surveillance experiment

February 9, 2024

The London Underground employed AI surveillance technology in a year-long trial. 

From October 2022 to September 2023, Transport for London (TfL) tested 11 distinct algorithms at Willesden Green Tube station, in the northwest of the city. 

According to detailed documents obtained by WIRED, the trial involved monitoring thousands of passengers’ movements, behaviors, and body language to identify potential criminal activities and safety hazards. 

The AI software applied computer vision (CV), the branch of machine learning concerned with interpreting images and video, to live CCTV footage and was trained to identify aggressive behavior, weapons, fare evasion, and hazards such as people potentially falling onto the tracks.
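
To make the setup concrete, here is a minimal sketch of how a pipeline of this kind might work. It is not TfL's actual system: OpenCV's built-in HOG person detector stands in for the trial's proprietary models, and the platform-edge zone, video file name, and alert logic are all hypothetical.

```python
import cv2

# Stand-in detector: OpenCV's default HOG-based person detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Hypothetical region of interest (x, y, width, height) marking the
# area beyond the yellow safety line on the platform.
PLATFORM_EDGE_ZONE = (0, 400, 1280, 80)

def overlaps(box, zone):
    """Return True if a detection box intersects the zone."""
    bx, by, bw, bh = box
    zx, zy, zw, zh = zone
    return bx < zx + zw and zx < bx + bw and by < zy + zh and zy < by + bh

cap = cv2.VideoCapture("platform_camera.mp4")  # stand-in for a live CCTV feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the frame; each box is (x, y, w, h).
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for box in boxes:
        if overlaps(box, PLATFORM_EDGE_ZONE):
            print("ALERT: person detected past the yellow safety line")
cap.release()
```

In a real deployment, each alert category (weapons, aggression, fare evasion) would presumably use its own trained model rather than a generic person detector, but the frame-in, alert-out structure is the same.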

The UK police have previously experimented with AI surveillance and continue to use it at some public events, as they did at a Beyoncé concert last year.

However, the technology has often proved ineffective, and human rights groups have criticized it as a troubling invasion of privacy and a source of prejudice and discrimination.

AI video technology has a problematic history, with numerous projects worldwide underdelivering and, in some cases, wrongly associating darker-skinned people with crimes they didn't commit. 

Throughout TfL’s trial period, some 44,000 alerts were generated, of which approximately 19,000 were relayed directly to staff for intervention.

Police officers supported the tests by brandishing weapons such as machetes and guns within the CCTV cameras' field of view (albeit while the station was closed) to help train the AI.

Here’s the complete list of results:

  1. Total alerts: The AI system issued more than 44,000 alerts in total.
  2. Real-time alerts to station staff: Around 19,000 alerts were delivered in real time to station staff for immediate action.
  3. Fare evasion alerts: The system generated 26,000 alerts related to fare evasion.
  4. Wheelchair alerts: There were 59 alerts concerning wheelchair users at the station, which lacks proper wheelchair access facilities.
  5. Safety line alerts: Nearly 2,200 alerts were issued for people crossing the yellow safety lines on platforms.
  6. Platform edge alerts: The system generated 39 alerts for people leaning over the edge of the train platforms.
  7. Extended bench sitting alerts: Almost 2,000 alerts concerned people sitting on benches for prolonged periods, which could indicate various concerns, including passenger well-being or security risks.
  8. Aggressive behavior alerts: There were 66 alerts related to aggressive behavior, although the system struggled to detect such incidents reliably due to insufficient training data.

However, the AI system performed poorly in some scenarios, producing erroneous results such as flagging children passing through ticket barriers as potential fare evaders.
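
To illustrate why this kind of false positive is easy to produce, here is a hypothetical sketch (not TfL's actual logic) of a naive fare evasion rule that flags any barrier crossing without a matching ticket tap; the Crossing records and heights are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Crossing:
    person_height_cm: int  # estimated from the camera
    tapped_ticket: bool    # did this crossing coincide with a valid tap?

def flag_fare_evasion(crossings):
    """Naive rule: flag every crossing that lacks a ticket tap."""
    return [c for c in crossings if not c.tapped_ticket]

# A parent taps once and their child walks through behind them. The child
# has no tap of their own, so the naive rule flags them even though young
# children travel free and no evasion occurred.
events = [
    Crossing(person_height_cm=175, tapped_ticket=True),   # parent
    Crossing(person_height_cm=110, tapped_ticket=False),  # child following
]
print(flag_fare_evasion(events))  # flags the child: a false positive
```

A more careful rule would need additional signals, such as height, gate timing, or group travel, before raising an alert.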

According to TfL, the ultimate goal is to foster a safer, more efficient Tube that protects both the public and staff.

AI surveillance technology isn't intrinsically awful when used for public safety, but once the tech is in place, keeping it under control is a tricky endeavor.

There is already evidence of AI misuse in the UK’s public sector, and scandals in other countries indicate this is a slippery slope when not handled with the utmost care.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
