Date: 29/11/18

British Cops Are Building an AI That Flags People for Crimes That Haven’t Happened Yet

Police in the UK are piloting a project that uses artificial intelligence to determine how likely someone is to commit, or be the victim of, a serious crime. Those crimes include offenses involving a gun or knife, as well as modern slavery, New Scientist reported on Monday. The hope is to use this information to detect potential criminals or victims and intervene with counselors or social services before crimes take place.
 
Dubbed the National Data Analytics Solution (NDAS), the system pulls data from local and national police databases. Ian Donnelly, the police lead on the project, told New Scientist that they have already collected over a terabyte of data from these systems, including logs of committed crimes and records on about 5 million identifiable people.
 
The system draws 1,400 indicators from this data that can help flag someone who may commit a crime, such as the number of times a person has committed a crime with assistance and how many people in their network have committed crimes. People in the database whom the system’s algorithm flags as prone to violent acts will get a “risk score,” New Scientist reported, which signals their likelihood of committing a serious crime in the future.
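To illustrate the general idea only (NDAS’s actual model, indicators, and weights have not been published), a scoring system of this kind typically combines many per-person indicators into a single number. The short Python sketch below uses entirely made-up indicator names and weights as a toy weighted sum, purely as an assumption-laden illustration of the concept.

    # Purely illustrative sketch: hypothetical indicators and weights,
    # not NDAS's real model (which is not public).
    from typing import Dict

    # Hypothetical weights for a handful of indicators
    # (NDAS reportedly uses around 1,400 of them).
    WEIGHTS: Dict[str, float] = {
        "crimes_committed_with_assistance": 0.8,
        "offenders_in_social_network": 0.5,
        "prior_violent_offences": 1.2,
    }

    def risk_score(indicators: Dict[str, float]) -> float:
        """Combine per-person indicator values into a single score (toy weighted sum)."""
        return sum(WEIGHTS.get(name, 0.0) * value for name, value in indicators.items())

    # Example: a person with 2 assisted offences and 3 offenders in their network.
    print(risk_score({"crimes_committed_with_assistance": 2,
                      "offenders_in_social_network": 3}))  # -> 3.1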
 
West Midlands Police is heading the trial project through the end of March 2019 and expects to have a prototype by then. Eight other police departments are reportedly involved as well, and the hope is to eventually expand the system’s use to all police departments in the UK.
 
Donnelly told New Scientist that they don’t plan to arrest anyone before they’ve committed a crime, but rather to offer counseling to those whom the system indicates might need it. He also noted that police funding has been cut in recent years, so a system like NDAS could help streamline and prioritize the process of deciding who in their databases most needs intervention.
 
Even if the intentions here are well-meaning, it’s easy to imagine how such a system could have dangerous implications. For starters, there’s a serious invasion of privacy in intervening with individuals before anything traumatizing has even happened. The system effectively sends mental health professionals to people’s homes because an algorithm suggested there’s a chance they may one day commit or fall victim to a crime. Enacting that type of intervention across an entire country paints a picture of an eerily intrusive future.
 
Aside from the unsettling possibility of Minority Report-style knocks on the door, there is still a litany of glaring issues with AI-based detection systems. They are not free from bias: as Andrew Ferguson at the University of the District of Columbia told New Scientist, arrest counts don’t inherently signal a hot spot for crime so much as where police officers are sent, and that disproportionately impacts people of color and poor neighborhoods. It also means the criminal databases the system pulls from aren’t representative of society as a whole, which in turn means individuals living in heavily policed areas are most at risk of being flagged.




