AI is revolutionizing the world. Businesses, organizations, and private individuals are already using it for a wide array of tasks.
This raises a question: Can the government use AI too? Should it?
Criminal justice systems around the world are already exploring ways to utilize AI’s efficiency, consistency, and cost-saving powers.
To be more specific, the National Institute of Justice (NIJ) currently supports AI research in the following fields: video and image analysis, DNA analysis, gunshot detection, and crime forecasting.
But with AI’s great power come complex problems about fairness, accountability, and privacy. Governments around the world need to grapple with these issues.
In this article, we’ll delve into the ways AI is or can be used in the justice system—and what problems they may present.
Applications of AI in Criminal Justice
The criminal justice system has been using AI in several ways.
1. Data Analysis
Manually sifting through massive datasets of phone records, social media activity, and surveillance footage would take countless hours. AI can do it in a fraction of the time, and generate insights in the process.
AI’s robust pattern-recognition abilities can track movements, recognize faces, and even detect and analyze gunshots from video or audio files. This makes the process of identifying suspects and corroborating evidence more efficient.
AI is also being used not just to investigate suspects but to keep police officers in check. The LAPD has used AI to analyze officers’ bodycam footage, observing their body language and word choice during traffic stops to help ensure that officers do not abuse their power.
2. DNA Analysis
DNA testing has been a cornerstone of criminal investigations. However, manual testing is slow and often imperfect. For example, a sample taken from a victim can contain DNA from multiple people, including people not involved in the crime.
Scientific institutions are now exploring ways of using AI to deconvolute these mixed DNA profiles, allowing investigators to arrive at leads more precisely.
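To make the idea concrete, here is a toy sketch of what mixture deconvolution means: given the set of alleles observed in a mixed sample, find which pairs of candidate profiles could jointly explain it. The profiles and allele values below are invented for illustration; this is not how any real forensic tool works internally.

```python
from itertools import combinations

def explain_mixture(mixture, candidates):
    """Return pairs of candidate profiles whose combined alleles
    exactly cover the observed mixture (a toy two-person model)."""
    hits = []
    for (name_a, prof_a), (name_b, prof_b) in combinations(candidates.items(), 2):
        if set(prof_a) | set(prof_b) == set(mixture):
            hits.append((name_a, name_b))
    return hits

# Hypothetical alleles observed at a single locus.
observed = {10, 12, 14, 15}
profiles = {"person_a": {10, 14}, "person_b": {12, 15}, "person_c": {10, 12}}

print(explain_mixture(observed, profiles))  # [('person_a', 'person_b')]
```

Real probabilistic genotyping software must also handle allele drop-out, noise, and likelihood ratios; this brute-force search only illustrates the combinatorial core of the problem that AI is being applied to.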
3. Predictive Policing
Predictive policing uses AI algorithms to analyze historical crime data to predict where crimes are likely to occur. The police departments of cities like Chicago and Los Angeles have already implemented these systems to allocate resources more efficiently.
By being more aware of high-risk areas, police departments can be better prepared to respond quickly if crime arises in these places.
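At its simplest, the approach above amounts to ranking locations by historical incident counts. The grid cells and incident log below are invented, and real systems use far richer statistical models, but the core input (past crime data) is the same.

```python
from collections import Counter

def hotspot_ranking(incident_log, top_k=3):
    """Rank grid cells by historical incident count: a crude
    stand-in for the models deployed systems actually use."""
    counts = Counter(cell for cell, _offense in incident_log)
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical incident log: (grid cell, offense type).
log = [("A1", "theft"), ("A1", "assault"), ("B2", "theft"),
       ("A1", "theft"), ("C3", "vandalism"), ("B2", "theft")]

print(hotspot_ranking(log, top_k=2))  # ['A1', 'B2']
```

Note that the ranking is driven entirely by where incidents were recorded in the past, which is exactly why the bias concerns raised later in this article matter.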
4. Risk Assessment Tools
AI is now also being used in risk assessment tools that help inform bail decisions, parole evaluations, and sentencing recommendations. These tools examine factors such as criminal history, age, and employment status to estimate the likelihood of someone reoffending.
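As a rough illustration, such a tool might combine those factors into a probability with something like a logistic model. The weights below are entirely made up; deployed tools are trained on large historical datasets and are often proprietary.

```python
import math

def reoffense_risk(prior_convictions, age, employed):
    """Toy logistic score with invented weights; purely illustrative."""
    z = (0.4 * prior_convictions           # more priors -> higher risk
         - 0.05 * (age - 18)               # older -> lower risk
         - 0.8 * (1 if employed else 0))   # employment -> lower risk
    return 1 / (1 + math.exp(-z))          # squash into a 0-1 probability

print(round(reoffense_risk(prior_convictions=3, age=40, employed=True), 2))  # 0.33
```

Even this toy version makes the fairness problem visible: whoever chooses the input factors and weights is making value judgments that directly shape people’s outcomes.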
5. Administrative Tasks
More and more police departments are now using AI for their administrative tasks, just as businesses and private individuals do. One of the most prominent examples is using AI to draft police reports based on an officer’s bodycam footage.
Challenges with AI Use for Criminal Justice
While AI can boost the criminal justice system’s overall efficiency, there are several legal and ethical challenges to consider.
1. Bias and Discrimination
Proponents might say that one big advantage of AI is that it’s completely objective: it’s not burdened by emotion or human bias.
However, AI systems learn from historical data, which was compiled by human beings and therefore reflects human biases. Racial and socioeconomic biases underlie much of this historical data, so it’s sensible to expect that AI might replicate them.
For example, AI-powered facial recognition systems might be more inclined to implicate people with darker skin tones. Or predictive policing systems (as discussed above) just might perpetuate the cycle of racial profiling and over-policing in poorer neighborhoods.
And so, instead of being objective and eliminating bias, AI just might entrench it.
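The feedback loop described above can be made concrete with a tiny simulation. Both neighborhoods below have the same true offense rate; only historical patrol intensity differs. Yet the arrest data a model would be trained on looks very different. All numbers are invented.

```python
import random

random.seed(0)  # reproducible illustration

# The true offense rate is identical in both neighborhoods...
TRUE_RATE = {"north": 0.10, "south": 0.10}
# ...but historical patrol intensity was not.
PATROL_RATE = {"north": 0.9, "south": 0.3}

def simulate_arrest_data(n_events=10_000):
    arrests = {"north": 0, "south": 0}
    for _ in range(n_events):
        hood = random.choice(["north", "south"])
        offense_occurs = random.random() < TRUE_RATE[hood]
        # An offense only enters the dataset if a patrol observes it.
        if offense_occurs and random.random() < PATROL_RATE[hood]:
            arrests[hood] += 1
    return arrests

data = simulate_arrest_data()
# A model trained on these counts "learns" that north is riskier,
# even though the underlying offense rates are identical.
print(data)
```

The skewed counts are an artifact of where police looked, not of where crime happened; a predictive system trained on them would send even more patrols north, deepening the skew.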
2. Transparency and Accountability
AI algorithms often function as “black boxes,” meaning that their decision-making processes are opaque to users and stakeholders.
This lack of transparency makes it difficult to challenge or audit decisions, particularly when it comes to high-stakes scenarios such as sentencing.
And if justice systems become overly reliant on AI, it will be difficult to hold those in power accountable if they can simply say that the AI made the decision.
To ensure accountability, final decisions should still be made by human beings, and there must be greater clarity in how the algorithms operate.
3. Erosion of Privacy
AI cannot function without analyzing people’s data. And as time goes on, the criminal justice system must constantly refine and update its algorithms and datasets to improve its AI.
That, however, requires continually obtaining and using people’s data.
With more and more people looking into VPNs and other cybersecurity measures, the populace may grow disgruntled with how AI taps into their data.
After all, things such as facial recognition, license plate readers, and social media monitoring can infringe on individuals’ rights to privacy and freedom of expression.
How Can We Properly Use AI for Criminal Justice?
Unfortunately, there is no easy answer to this. It’s something that we must continuously grapple with as we go along and experiment.
AI holds great promise. It can make our criminal justice systems far more efficient and accurate. However, it can also lead to the opposite and simply perpetuate pre-existing problems like racial profiling.
As a starting point, more robust legal standards must be established. There must be clear legal guidelines for police officers and prosecutors about AI use to address issues like accountability and transparency.
Any AI systems must also be implemented with transparency in mind. Those who use AI should be aware of how its algorithms work—after all, as a basic rule, we should not be using something if we do not know how it works.
Cybersecurity should also be a top priority. With AI constantly analyzing people’s information, these datasets will probably be hot targets for cybercriminals.
Overall, we need to be careful. People’s lives, and the very pillars of justice, are at stake. As with any kind of AI use, there must always be comprehensive human oversight of every process involving it.