A hot potato: Another authority is turning to AI in the hope of predicting crimes or problematic behavior before they occur. New York’s Metropolitan Transportation Authority (MTA) plans to use the technology on the city’s subways by analyzing live security footage for potential troublemakers.
MTA Chief Security Officer Michael Kemper said (via Gothamist) that the agency is working with AI companies to develop software that can analyze real-time subway platform feeds.
The plan is that if the software identifies someone who appears to be acting unusually, suspiciously, or irrationally, it would trigger an alert that would in turn prompt a response from security or the police department. Kemper said that the police response would come “before waiting for something to happen,” i.e., stopping a crime before it happens, Minority Report-style.
“AI is the future,” Kemper said during a committee meeting. “We’re working with tech companies literally right now and seeing what’s out there right now on the market, what’s feasible, what would work in the subway system.”
MTA spokesperson Aaron Donovan emphasized that the predictive system would not use facial recognition, adding that the AI technology is designed to identify behaviors, not people.
There have been several unprovoked attacks in New York’s subway system in recent years, including people being pushed onto the tracks. Ten people were murdered in the NYC subways last year, according to police.
Not everyone is happy about the planned use of this technology. “Using artificial intelligence – a technology notoriously unreliable and biased – to monitor our subways and send in police risks exacerbating these disparities and creating new problems,” wrote New York Civil Liberties Union Senior Policy Counsel Justin Harrison in a statement. “Living in a sweeping surveillance state shouldn’t be the price we pay to be safe. Real public safety comes from investing in our communities, not from omnipresent surveillance.”
This is far from the first crime-predicting technology we’ve seen. Earlier this month, it was reported that the UK government is developing a homicide prediction algorithm to identify potential violent offenders.
South Korea is also testing “Dejaview,” an artificial-intelligence platform that scans live CCTV feeds to spot patterns linked to imminent crime and alert authorities before it happens.
Back in 2022, an academic team announced it had created a model capable of forecasting crime up to seven days ahead with roughly 90 percent accuracy. That same year, reports emerged that China was exploring large-scale citizen profiling, aiming to use automated analytics to flag individuals who might become dissidents or lawbreakers before any wrongdoing occurs.