The Metropolitan Transportation Authority (MTA) has quietly rolled out AI surveillance at seven New York City subway stations to monitor fare evasion.
The system relies on AI-powered software designed by AWAAIT, a Spanish technology company. The MTA has around 10,000 surveillance cameras throughout its transit system, and the software integrates with these to spot turnstile jumpers.
The software detects fare evaders and then sends their photos to the smartphones of station agents. The MTA said that the surveillance system doesn’t report fare evaders to the NYPD, but it wouldn’t comment on whether this policy might change in the future.
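To make the reported workflow concrete, here is a minimal, purely illustrative sketch of a detect-and-notify pipeline of this kind. The class names, the `looks_like_turnstile_jump` stub, and the retention window are assumptions for illustration only; AWAAIT has not published its implementation, and nothing below reflects the MTA’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class EvasionEvent:
    camera_id: str
    timestamp: datetime
    snapshot: bytes  # still image captured at the moment of the suspected jump


class FareEvasionMonitor:
    """Illustrative detect-and-notify loop; not AWAAIT's actual software."""

    def __init__(self, retention: timedelta):
        # Minton says images are kept "for a limited period"; the duration is
        # undisclosed, so this retention window is a placeholder.
        self.retention = retention
        self.events: list[EvasionEvent] = []

    def process_frame(self, camera_id: str, frame: bytes) -> None:
        # Stand-in for the vendor's proprietary computer-vision classifier.
        if self.looks_like_turnstile_jump(frame):
            event = EvasionEvent(camera_id, datetime.now(timezone.utc), frame)
            self.events.append(event)
            self.notify_station_agent(event)

    def purge_expired(self) -> None:
        # Drop snapshots older than the retention window.
        cutoff = datetime.now(timezone.utc) - self.retention
        self.events = [e for e in self.events if e.timestamp >= cutoff]

    def looks_like_turnstile_jump(self, frame: bytes) -> bool:
        # Placeholder: a real system would run a trained vision model here.
        return False

    def notify_station_agent(self, event: EvasionEvent) -> None:
        # Reported behavior: push the snapshot to a station agent's smartphone.
        print(f"Alert: possible fare evasion at camera {event.camera_id}")
```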
MTA Communications Director Tim Minton said that the MTA was “using it essentially as a counting tool. The objective is to determine how many people are evading the fare and how are they doing it.”
The seven stations where the AI surveillance system has been deployed were not named, and the MTA said it plans to implement the system at approximately two dozen more unnamed subway stations before the end of 2023.
In a recent report, the MTA said it lost $690m to fare evaders throughout its transit system in 2022, with subway fare dodgers accounting for $285m of that. Having 10,000 cameras trained on the transit system is all well and good, but monitoring them is an overwhelming task for human operators. It seems like the perfect job for an AI tool.
But NYC is already one of the most heavily surveilled cities in the world, and Amnesty International has expressed concern over the almost 25,000 cameras used by the NYPD, along with the department’s use of facial recognition technology.
The use of AI surveillance in the subway has heightened these privacy concerns. Minton said the images captured by the software are stored only “for a limited period,” without saying how long that is.
Surveillance Technology Oversight Project (S.T.O.P.) Executive Director Albert Fox Cahn said, “AI can’t fix the unaffordability of transit, but it can creep people out,” adding that “this raises real concerns about how the MTA is tracking New Yorkers and where that data is kept.”
The MTA’s trial run of the AI surveillance system actually started in May of this year but only came to light in July, when S.T.O.P. announced that it had finally obtained a redacted version of the previously unreported contract under the Freedom of Information Law. It makes you wonder how widespread the unreported use of AI surveillance is in our cities.
The discriminatory impact of camera surveillance is already a sore point for privacy advocates, and artificial intelligence has drawn its own share of criticism for reinforcing biases. Combining AI with camera surveillance may save the MTA money, and it may even keep us safer, but at what cost?