AI cameras to catch more UK drivers using phones and not wearing seatbelts

September 2, 2024

  • UK police forces are rolling out AI cameras to catch driving offenses
  • The camera systems catch people driving on their phones or without seatbelts
  • Privacy groups are concerned about how the data will be processed and used

Starting September 3rd, Greater Manchester in the UK will become the latest region to deploy AI cameras to automatically detect drivers using mobile phones or not wearing seatbelts. 

The “Heads Up” cameras, developed by Australian company Acusensus, use machine learning algorithms to analyze images of passing vehicles.

The goal is to identify driving offenses at a scale and precision not possible without AI automation. 

Transport for Greater Manchester (TfGM) is confident the project will help reduce dangerous driving practices that contribute to crashes.

“In Greater Manchester, we know that distractions and not wearing seatbelts are key factors in a number of road traffic collisions which have resulted in people being killed or seriously injured,” said Peter Boulton, TfGM’s network director for highways.

Boulton continued, “By utilising this state-of-the-art technology provided by Acusensus, we hope to gain a better understanding of how many drivers break the law in this way, whilst also helping to reduce these dangerous driving practices and make our roads safer for everyone.”

The trial is part of a wider partnership between Acusensus and the UK government’s National Highways agency. 

In addition to Greater Manchester, the AI cameras will be deployed in nine other regions: Durham, Humberside, Staffordshire, West Mercia, Northamptonshire, Wiltshire, Norfolk, Thames Valley, and Sussex.

Rolling out more AI cameras won’t just help police catch and punish problem drivers – it could also be a cash cow for the government. More tickets mean more money in public coffers.

How the cameras work

The Acusensus system captures two images of each passing vehicle: a shallow-angle shot to check for seatbelt compliance and phone use, and a wider-angle shot to detect other risky behaviors, such as texting.

The AI software then analyzes the images to identify potential offenses, which are flagged for human review before any penalties are issued.

The driver receives a warning or fine if the human check confirms an offense. If no offense is found, Acusensus says the image is immediately deleted. 
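The two-stage process described above can be sketched roughly as follows. This is purely illustrative; all names are hypothetical, and Acusensus’s actual proprietary system will differ.

```python
# Illustrative sketch of the reported review pipeline:
# AI flags a potential offense -> human reviewer confirms or rejects ->
# fine/warning issued, or the image is deleted.
from dataclasses import dataclass


@dataclass
class Capture:
    vehicle_id: str        # hypothetical identifier for the captured vehicle
    ai_flagged: bool       # did the AI detect a potential offense?
    human_confirmed: bool  # outcome of the mandatory human review


def process(capture: Capture) -> str:
    """Return the outcome for one captured vehicle image."""
    if not capture.ai_flagged:
        return "image deleted"            # no potential offense detected
    if capture.human_confirmed:
        return "warning or fine issued"   # reviewer confirms the AI's flag
    return "image deleted"                # flag rejected on human review


print(process(Capture("ABC123", ai_flagged=True, human_confirmed=False)))
# -> image deleted
```

The key design point reported by Acusensus is that no penalty is automated: a human check sits between the AI’s flag and any enforcement action.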

Past pilots demonstrated how effective these camera systems are. Last year, in Devon and Cornwall, an AI camera system detected 117 instances of mobile phone use and 180 seatbelt violations in just 72 hours.

While the goals of reducing distracted driving and increasing seatbelt use are commendable, the UK’s broadening adoption of AI surveillance technology is generating a privacy backlash.

Privacy groups are concerned about potential misidentification, bias, and misuse of the collected data. The risks aren’t merely academic; they’ve already been exposed by past AI policing failures.

In the US in particular, there are several examples of individuals being wrongly accused by automated policing systems, sometimes even resulting in temporary jail time.

Not long ago, UK train stations deployed AI cameras capable of detecting crimes, gender, and even emotions.

“The rollout and normalisation of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step,” said Jake Hurfurt, head of research at UK civil liberties group Big Brother Watch in response to that project. 

UK police have also ramped up the use of facial recognition to scan crowds for wanted individuals, leading to numerous arrests last year. 

Recent riots and unrest across the country have reinforced the use of such technologies during periods of public disorder.

The worry is, will it end there? Or will emotion-detecting surveillance become part of modern life?

Big Brother Watch argues that live facial recognition is spiraling out of control already.

As AI-powered surveillance becomes a new norm, striking the right balance between public safety benefits and risks to privacy and civil liberties will be exceptionally tough.

It’s hardly comfortable to have AI cameras peer into people’s lives. But neither is sharing the road with people playing on their phones behind the wheel.

If the AI system can demonstrably save lives, that might sway skeptics. However, authorities will need to prove the technology’s effectiveness and ensure rock-solid data protection measures are in place.

Of course, the public should also know exactly how their data is collected, used, and safeguarded.

The trouble is, as these systems become more common, they also become tougher to control. And poor oversight can lead to some pretty dire consequences.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
