Just recently, the Justice Department announced that it plans to spend at least $2 million on further research into the crime-fighting capabilities of AI.
It's believed that this technology could eventually help it fight human trafficking, child pornography, drug trafficking, and other crimes.
At the moment, investigators addressing these issues have to sift through an enormous amount of data. The hope is that AI will let them comb through the details and sort that data at a much faster pace.
According to the National Institute of Justice, the technology would help sift through suspects' communications in order to generate further investigative leads for law enforcement.
Their announcement states that they will be looking to develop technologies that might be able to:
“distinguish a contraband file through its encrypted container, without breaking encryption, with a sufficient degree of certainty to support a probable cause for a court order to unlock the device, based on the encryption pattern of a particular [type of file].”
It's not only law enforcement in the U.S. that is looking to use AI, either. Officers in London are also reportedly looking to use AI to sort through data in much the same way, moving that data to cloud providers such as Amazon Web Services.
China has also been working on a new police station in Wuhan that won't have any human officers but will instead be powered by AI.
However, despite the enthusiasm to move in this direction, the technology still has some kinks to work out; image-recognition systems, for example, have had difficulty distinguishing skin from sand. A great deal of improvement is expected over the next couple of years, though.
There are privacy concerns over the sensitivity of the data and the potential for it to be misused. There is also the risk of bias: researchers from Stanford and MIT have previously examined a variety of commercial facial-analysis programs and found that they appear to be biased with regard to gender and skin type.
Some folks are a lot more concerned than others; Elon Musk and Julian Assange, for instance, have both expressed fears about moving in this direction.
Assange previously stated that merging AI systems with national security was a threat to mankind. And Musk, who says he has had exposure to some of the most cutting-edge AI, has raised his own concerns about AI safety on multiple occasions and suggested that strict regulation is needed to offset the danger.
AI is increasingly being used in the criminal justice system in a growing number of U.S. states. For example, some judges have been using automated risk-assessment systems to inform custody and pretrial release decisions. Legal experts have warned, however, that this could pose serious risks to due process, civil liberties, and more.
AI software recently developed by an Israeli startup called LawGeex proved far more efficient at analyzing legal documents than highly experienced lawyers: it sorted through nondisclosure agreements (NDAs) with more speed and accuracy than at least 20 experienced attorneys.
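To make the idea of automated contract review concrete, here is a minimal, purely illustrative sketch of flagging NDA clauses for human review with a simple keyword heuristic. The term list and clauses below are hypothetical examples; LawGeex's actual system is proprietary and uses trained language models, not keyword matching.

```python
# Illustrative sketch only: flag NDA clauses containing risky-sounding
# terms so a human reviewer can look at them first. Real contract-review
# AI relies on trained NLP models rather than a fixed keyword list.

# Hypothetical risk indicators an analyst might screen for.
RISK_TERMS = ["perpetuity", "irrevocable", "unlimited liability", "non-compete"]

def flag_clauses(clauses):
    """Return the clauses that contain any risk term (case-insensitive)."""
    flagged = []
    for clause in clauses:
        lowered = clause.lower()
        if any(term in lowered for term in RISK_TERMS):
            flagged.append(clause)
    return flagged

sample = [
    "Confidential Information shall be protected in perpetuity.",
    "Either party may terminate this Agreement with 30 days notice.",
]
print(flag_clauses(sample))  # flags only the first clause
```

Even this toy version shows why machines win on speed: scanning thousands of clauses takes milliseconds, while the accuracy gap is what modern statistical models are meant to close.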
It's evident that this technology is going to greatly transform the legal profession and the criminal justice system as we know it over time.
Can we expect law enforcement agencies to strike a suitable balance between reaping the benefits of this technology and remaining mindful of privacy and personal security? We'll have to find out as the trend unfolds.
Pics:
Henrik5000/istock
istock via theweek.com/articles/689359/how-humans-lose-control-artificial-intelligence
Futurama via reillytop10.com
Sources:
https://gizmodo.com/justice-department-drops-2-million-to-research-crime-f-1823367404
https://www.cmswire.com/customer-experience/ai-bias-when-algorithms-go-bad/
https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/
https://medium.com/insurge-intelligence/edward-snowden-we-must-seize-the-means-of-communication-to-protect-basic-freedoms-8c01b75eb384
https://www.cnbc.com/2017/12/18/9-mind-blowing-things-elon-musk-said-about-robots-and-ai-in-2017.html
https://futurism.com/chinas-ai-police-station-humans/
http://ohrh.law.ox.ac.uk/why-artificial-intelligence-is-already-a-human-rights-issue/
https://www.forbes.com/sites/haroldstark/2017/07/19/artificial-intelligence-and-the-overwhelming-question-of-human-rights/#6d8f5e1c6c90
https://www.forbes.com/sites/bernardmarr/2017/09/19/how-robots-iot-and-artificial-intelligence-are-transforming-the-police/2/#27274b5e2bde
https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security
http://www.standard.net/Business/2018/01/31/AI-in-the-court-When-algorithms-rule-on-jail-time
https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case
https://www.timesofisrael.com/israeli-ai-software-whips-expert-lawyers-in-contract-analysis/
https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html