In preparation for the 2020 Tokyo Olympics, the Japanese police force will introduce predictive policing using artificial intelligence technology.
More specifically, the Kanagawa Prefectural Police will be the first to use this technology to identify crimes before they even happen.
Officials are looking to secure funding for the project by April 2018. If the funding comes through, the police force would be able to test the predictive policing system by 2020.
The AI system can determine whether one person was behind multiple crimes by comparing data related to each crime. Using that data, it can then predict a criminal's next move or the locations that could be targeted next.
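The article does not describe how the system compares crime records, but the general idea of linking crimes through shared attributes can be sketched in a few lines. Everything below is illustrative: the field names, weights, and threshold are invented for the example, not drawn from the Kanagawa system.

```python
# Illustrative sketch only: score whether two recorded crimes might share a
# perpetrator by comparing their attributes. All field names, weights, and
# the threshold are invented for illustration.

def similarity(crime_a, crime_b, weights=None):
    """Return a 0..1 score for how alike two crime records are."""
    weights = weights or {"method": 0.5, "time_of_day": 0.2, "district": 0.3}
    score = 0.0
    for field, weight in weights.items():
        if crime_a.get(field) == crime_b.get(field):
            score += weight
    return score

def link_crimes(crimes, threshold=0.6):
    """Pair up crimes whose similarity score meets the threshold."""
    links = []
    for i in range(len(crimes)):
        for j in range(i + 1, len(crimes)):
            if similarity(crimes[i], crimes[j]) >= threshold:
                links.append((i, j))
    return links

crimes = [
    {"method": "pickpocketing", "time_of_day": "evening", "district": "Yokohama"},
    {"method": "pickpocketing", "time_of_day": "evening", "district": "Kawasaki"},
    {"method": "burglary", "time_of_day": "night", "district": "Yokohama"},
]
print(link_crimes(crimes))  # → [(0, 1)]: same method and time of day
```

A real deep learning system would learn such weights from historical data rather than hard-code them, which is precisely what makes the algorithm's behavior hard to audit.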
The goal is to prevent crimes and ease the burden on the police force, prefectural government sources told The Japan Times.
The AI system will have full access to the police force's statistics
The system itself is set to use a deep learning algorithm that, much like Google's current deep learning projects, continues to learn in real time as it collects more data.
The AI system is set to have full access to the police force's statistics as well as real-time updates on time, place, weather, and geographical conditions relevant to crimes committed.
There's even discussion of letting the AI monitor social media posts that mention specific areas or people suspected of involvement in a crime.
In an interview with The Japan Times, Toyoaki Nishida, a professor of information science at Kyoto University's graduate school, said predictive measures work only if the underlying hypothesis proves correct. He also noted that the system could raise more problems than originally anticipated.
For example, the algorithm's calculations could lead to the same area being policed constantly. He said residents would have to approve of the system and that their interests should be treated as the top priority.
Palantir Technologies secretly used predictive policing in New Orleans
There are a number of other areas throughout the world looking to start predictive policing. The company Palantir recently got into hot water by secretly using its predictive policing technology in New Orleans, Louisiana. The application drew criticism from both law enforcement officers and those concerned with civil liberties.
“They’re creating a target list, but we’re not going after Al Qaeda in Syria,” said a former law enforcement official who observed Palantir’s work first-hand and spoke to The Verge on condition of anonymity.
“Palantir is a great example of an absolutely ridiculous amount of money spent on a tech tool that may have some application,” the former official said. “However, it’s not the right tool for local and state law enforcement.”
The Palantir partnership started in 2012 but only came to light recently, after the City of New Orleans and the company dissolved their agreement on February 21, 2018.
In 2016, the American Civil Liberties Union partnered with 16 other groups to make a joint statement against predictive policing after the Los Angeles Police Department and the University of California, Los Angeles attempted to use it.
The ACLU statement called out the potential for the AI to repeatedly police the same areas and groups of people before any crime is even committed.