Implementing Artificial Intelligence in Policing, Prosecuting, and Punishing
Joseph D. Pool
May 7, 2024
PHI 295: Philosophy of Criminal Justice
Rollins College
Introduction
There you stand in a ruined cityscape. Distant sirens blare, smoke and flame rise on the horizon, and a mass of shattered glass and melted metal surrounds you. You are pressed against the stone and marble facade of what was once an impressive and imposing skyscraper. Your back is against the wall and you have nowhere to turn. Ahead you see the object of your dread. The hum of millions of machines grows louder, and the thumping of synchronized steps echoes through the empty street as the inexorable, indomitable army of Terminator-esque robots marches toward you. Their primary directive? Ensure peace and security by eliminating all humans. Welcome to the AI-driven end of days.
While this would make for a great chapter in a dystopian Kindle read or a critically acclaimed scene in its Netflix adaptation, the story it tells is one whispered about in the halls of Congress, discussed ad nauseam in the situation rooms of the Pentagon, and hotly debated on the presidential stage. Beyond policymakers, it has caught the attention of professionals across all disciplines. Philosophers, religious leaders, computer scientists, engineers, sociologists, and scholars alike have spent the last few years wrestling with this worry. The question at hand is what the future of artificial intelligence will bring. Will it beget a period of peace, prosperity, and plenty, or will we be met with the droid army of Star Wars and a Skynet-induced end of civilization?
It is undeniable that the terms artificial intelligence and machine learning have risen to meteoric recognition across academic disciplines, age divides, and around the world. In a rush to stay relevant and meet a seemingly endless demand for AI, everyone from McKinsey to Duke University has poured effort, money, and people into AI-focused consulting, training, and implementation. To the fear of those who read dystopian novels, however, the rush to adopt AI has also been embraced by city officials and law enforcement agencies across America. Digital surveillance tools, end-to-end message encryption, and intelligent motion sensors have emerged everywhere. But if current trends continue, these are just the tip of the digital iceberg. To understand how the future of the criminal justice system will look, it is crucial to discuss the role that AI will likely play in the policing, prosecuting, and punishing of suspects. More importantly, it is necessary to consider the extent to which limitations should be placed on its use and special qualifications required of its users. This paper will argue that while artificial intelligence proves promising in countering criminal activity, a careless implementation of it across the criminal justice system would be catastrophic.
Officers, Attorneys, and Executioners
One last question that remains in framing this discussion is why to approach criminal justice through the lens of philosophy rather than a simple discussion of criminal procedure and law. The issue comes down to ethics, and specifically to normative ethics. Procedure and law are often positive: they describe the current state of affairs and operate on measurables, observables, and data points. Ethics and justice are instead normative: they focus not on "what is" but on "what ought to be." Whether in determining the moral nature of a person's behavior or in evaluating the actions of a government, it is important to consider the current and real, but it makes a meaningful difference to also account for the potential and ideal. The use of normative ethics in this paper allows for a critique of the current state of affairs and offers a chance to properly consider whether artificial intelligence ought to play a role in criminal justice, and to what extent it should be involved. With these ideas squared away and the ethical lens established, it is time to engage in a discussion of our future.
To prepare for the future of our criminal justice system, one must understand its present state and stated purpose. Criminal justice encapsulates three key components: policing, prosecuting, and punishing. In the policing phase, investigations are launched, warrants are issued, and arrests are made. This often takes the form of an active process in which police officers and law enforcement agents make contact with suspects, explain their suspicion, and, based on the circumstances, issue anything from a citation to an arrest, or even fire their service weapon. Following policing, prosecuting is the phase in which the gears of the legal system begin to grind. With the filing of formal charges, the suspect becomes the accused and then the defendant. While innocence is presumed, the defendant must actively counter the claims of the prosecuting attorney or risk being found guilty and receiving the punishment. Following a guilty verdict comes the last phase of the criminal justice process: punishment. The defendant is now the convicted and must go through another trial, the sentencing. Aggravating and mitigating factors are discussed, and the way in which the crime was committed and the external circumstances that shaped both the crime and the convicted individual are considered. For many, this ends in prison. Punishment can range from a misdemeanor carrying up to 364 days in jail to a felony punishable by anywhere from a year to life behind bars, or even the death penalty. The criminal justice system exists across these three phases of policing, prosecuting, and punishing, and relies on police officers, state attorneys, and, sometimes, executioners. However, these roles are increasingly being supplemented by artificial intelligence and augmented with the use of both software and hardware. The future is happening now, and happening at the behest of artificial intelligence and its permeation of our critical infrastructure and institutions.
To be at the forefront of this frontier, artificial intelligence’s role in criminal justice ought to be considered in terms of its practical applications and ethical implications.
Practical Applications
“AI is awesome!” That is the sentiment shared by many and backed by much data. According to a recent study from the Harvard Business School, “Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores.”[1] It is no wonder that police departments, federal agents, and the government at large are intent on boosting their own productivity and generating a higher success rate for themselves. On paper, AI is promising, and with more training and interaction it appears to keep improving: it reportedly generates strong results for those who use it responsibly and only gets better when fed more data and given more time to apply it. When it comes to use within the criminal justice process, its benefits can be broken down into practical applications across the three phases of policing, prosecuting, and punishing.
Whether for answering homework questions or policing a major city, AI is skilled at picking out and predicting patterns. Even in its current state, some argue that it is more than capable of this predictive policing, in which it uses probability to determine where a crime is likely to be committed[2] and where to station patrols to serve the most people in the most efficient way. In policing, and across all parts of the criminal justice system, artificial intelligence is a mean, lean, efficiency-optimizing machine. According to an analysis by Deloitte, “Machine learning and big data analysis make it possible to navigate through huge amounts of data on crime and terrorism, to identify patterns, correlations and trends … The ultimate goal is to create agile security systems that can detect crime or terrorism networks and suspicious activity, and even contribute to the effectiveness of justice systems.”[3] For departments looking to maximize their use of resources, and for taxpayers wanting to see results from the increasing amounts they spend on cities and municipalities, AI seems to be the answer. On paper, it makes policing work better and ensures time and money are not wasted chasing unlikely cases.
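The pattern-finding idea behind predictive policing can be illustrated with a deliberately simplified sketch. The incident data, grid-cell names, and `top_hotspots` helper below are all hypothetical; real systems use far richer spatiotemporal models, but the core move of ranking locations by historical incident counts looks something like this:

```python
from collections import Counter

# Hypothetical incident records: each historical incident is tagged with the
# map grid cell where it occurred. (Illustrative data, not a real dataset.)
incidents = [
    ("cell_3", "2023-01-02"), ("cell_3", "2023-01-05"),
    ("cell_7", "2023-01-04"), ("cell_3", "2023-02-11"),
    ("cell_1", "2023-02-12"), ("cell_7", "2023-03-01"),
]

# Tally past incidents per cell.
counts = Counter(cell for cell, _date in incidents)

def top_hotspots(counts: Counter, k: int = 2) -> list[str]:
    """Return the k cells with the most recorded incidents."""
    return [cell for cell, _n in counts.most_common(k)]

print(top_hotspots(counts))  # ['cell_3', 'cell_7']
```

Note that the model only "predicts" where incidents were previously recorded, a limitation that becomes important in the ethics discussion later in this paper.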
The next step of the criminal justice process has also seen dramatic implementation of AI to increase efficiency and reduce the need for people. In the phase of prosecuting, artificial intelligence can save time and effort by reading through massive amounts of discovery data, identifying and combing through relevant precedent and case files, and generating compelling critiques of arguments and alibis in a fraction of the time it takes a person to do so. While fears of AI replacing lawyers have emerged, those are likely unfounded. In fact, by collaborating with machine learning programs, lawyers can focus more on client interactions, personal communications, and the finishing touches on a case. As further explained by an article in Bloomberg Law, “AI can minimize the number of errors in the research process and help an attorney avoid missing important, relevant documents. Now the extra time previously spent on research, or on outsourcing research, can be better applied to reviewing and acting upon relevant documents and engaging in more strategic work.”[4] The use of artificial intelligence in law even has an endorsement from the American Bar Association. As Avaneesh Marwaha excitedly notes, “these attorneys are more likely to engage in strategic problem-solving, which can enable them to enjoy their work more … Happier, more satisfied attorneys are less likely to take sick days or suffer burnout. They’re also more likely to have adequate time (and patience) to thoroughly counsel their clients and mentor their subordinates.”[5] AI makes being an attorney more impactful and interesting. As it relates to criminal justice, these attorneys can focus on better defending their accused clients or on prosecuting those suspected of a crime. In a system in which criminal defense counsels are often overworked, overbooked, and overwhelmed, the gaining back of time and energy is no small feat.
This extra time in crafting an argument could be the difference between an innocent person going to prison and that individual getting a fair shake at justice.
When it comes to policing and prosecuting, there is always human error in determining a person’s guilt. The criminal justice system corrects for this by imposing different minimum standards of proof based on the phase of processing a person is in.[6] For arrests and investigations, a low “reasonable suspicion or probable cause” standard allows officers to detain first and determine later in an effort to prevent crime. When the arrested becomes the accused defendant, the standard is raised. In some proceedings, the standard is a “preponderance of the evidence,” which can be likened to a 51% likelihood of guilt; in others it is “clear and convincing evidence,” roughly a 66% chance of guilt. For criminal convictions, including the most severe cases and those carrying the death penalty, the standard is raised again to “beyond a reasonable doubt,” or approximately a 99% chance of guilt. These are not perfect estimates, and it is impossible to perfectly ascertain whether someone is 100% guilty. But this is where AI’s ability to sift through massive amounts of data and to calculate complex econometric equations and probabilities comes into play. While this has not been fully put into practice, it is possible that artificial intelligence, used in a courtroom alongside a judge and jury and fed the case details, may be able to better calculate likely guilt. As Christopher Malikschmitt of the American Bar Association suggests, “It is hard to imagine a professional task better suited to AI than that of the American judiciary—weighing competing written arguments across vast catalogues of caselaw is already within the ken of our current AI teaser systems and would lend vast efficiencies against ballooning litigation costs.”[7] Such a supposedly unbiased artificial intelligence agent could account for human error and take the role of presiding judge, freeing up time and energy in criminal justice.
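The rough probability readings of these standards can be made concrete in a small sketch. The `STANDARDS` mapping and `meets_standard` helper below are hypothetical illustrations of the approximations above (51%, 66%, 99%), not legal definitions, and a real "likelihood of guilt" could never be computed this cleanly:

```python
# Hypothetical thresholds, taken from the rough percentage readings used
# in this paper; they are illustrative, not legal definitions.
STANDARDS = {
    "preponderance of the evidence": 0.51,
    "clear and convincing evidence": 0.66,
    "beyond a reasonable doubt": 0.99,
}

def meets_standard(likelihood: float, standard: str) -> bool:
    """Check whether a computed likelihood of guilt clears a given standard."""
    return likelihood >= STANDARDS[standard]

print(meets_standard(0.70, "clear and convincing evidence"))  # True
print(meets_standard(0.70, "beyond a reasonable doubt"))      # False
```

The sketch makes the paper's point visible: the same evidence that satisfies one standard can fall far short of another, which is why the phase of the process determines what the state may do to a suspect.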
Finally, aided and abetted by artificial intelligence, we reach the stage of sentencing and punishing. To reach this point, artificial intelligence would have helped locate the criminals through predictive policing and aided in a guilty verdict with its unmatched knowledge base and deductive reasoning capabilities. In this final phase, AI can be used to deliver justice without incident or the incursion of immense guilt. Many prisons are overcrowded, and the prisoners, due in part to their dire conditions, are prone to violence and other incidents. With AI given the reins of these institutions, it is possible that conditions can be reimagined, prisoners can have their days dedicated to more meaningful pursuits, and the same predictive powers that prevent crime can be used to curb incidents between prisoners as well as between prisoners and correctional officers.
On the extreme end of punishment, and possible only “beyond a reasonable doubt,” comes the death penalty. Currently, the death penalty often takes the form of lethal injection and is administered by a person. There is a risk the first injection does not work. There is a further risk that the executioner possesses bias—whether against the death row inmate or in their defense—and thus cannot properly and fairly administer it. With an unfeeling, unbiased machine put behind the wheel—and given the needle—punishing, too, can be improved.
Ethical Implications
By this point in this paper, AI has been presented as the panacea to all of the problems plaguing the criminal justice system. It uses machine learning capabilities to improve efficiency, efficacy, and effectiveness across policing, prosecuting, and punishing. Without human input, there is no room for human error or implicit bias. This is a seemingly safe and salient solution. Except for the fact that it is not. None of these perfect-on-paper possibilities work as advertised because, at the end of the day, artificial intelligence programs are created by, trained by, and managed by people. These people have their own human errors and implicit biases. As UNESCO elaborates, “in no other field is the ethical compass more relevant than in artificial intelligence … The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.”[8] In its current state, AI is flawed.
These erroneously encoded biases are not harmless bugs; they threaten the lives and livelihoods of people in AI-patrolled neighborhoods. As explained by Renee Hutchins, the dean of the University of Maryland’s Francis King Carey School of Law, “race-based decision-making has been a part of American policing at least since the Fugitive Slave Acts of the pre-Civil War period.”[9] If AI models are fed mass amounts of data without the corrections or boundaries that programmers must establish to sort out junk data and wrongful sentences, the AI will further promote dangerous developments and problematic policing patterns. If, for example, AI is trained on raw arrest data instead of guilty convictions, police presence may end up positioned even more inefficiently than it is at present. All the purported benefits of efficiency and accuracy would go out the window, to the detriment of communities that would see additional policing, annoyances in the form of more traffic stops, and the feeling of ever-present government oversight.
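The feedback loop described above can be sketched in a few lines. The allocation policy, the starting arrest figures, and the one-new-arrest-per-patrol assumption are all hypothetical simplifications, but they show how a model trained on raw arrest counts can compound an initial disparity entirely on its own:

```python
# Hypothetical simulation: a naive model sends every patrol to the area
# with the most recorded arrests, and each patrol yields one new arrest
# record regardless of underlying crime. The recorded gap then grows by
# itself, even though both areas have the same true crime rate.
def allocate_patrols(arrests: dict[str, int], total_patrols: int = 10) -> dict[str, int]:
    """Naive policy: all patrols go to the area with the most recorded arrests."""
    hottest = max(arrests, key=arrests.get)
    return {area: total_patrols if area == hottest else 0 for area in arrests}

# Area A starts with more recorded arrests, e.g. from historically
# heavier policing, not from more actual crime.
arrests = {"A": 60, "B": 40}
for _round in range(3):
    patrols = allocate_patrols(arrests)
    for area, extra in patrols.items():
        arrests[area] += extra  # each patrol produces one new arrest record

print(arrests)  # {'A': 90, 'B': 40}: the initial disparity compounds
```

Training on convictions rather than raw arrests, or auditing the allocation policy itself, are the kinds of programmer-established boundaries the paragraph above calls for.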
Describing the role of these biases in policing and policymaking, Harvard cognitive philosopher Susanna Siegel notes, “culturally prevalent attitudes sometimes operate in the minds of individuals in ways that are typical of beliefs. They contribute to the interpretation of information, they lead to inferences, and they guide action.”[10] With no human accountability and no easy access to training data, these issues persist without correction. Biases built into past and present handling of criminal cases fuel a nebulous neural network in which officers, attorneys, and executioners are blissfully unaware of the problems plaguing their new enforcement apparatus. These reflect two far deeper issues with artificial intelligence: its self-teaching technology and the obfuscated way in which it turns raw data into actionable information.
As Brent Mittelstadt et al. write, “the uncertainty and opacity of the work being done by algorithms and its impact is also increasingly problematic. Algorithms have traditionally required decision-making rules and weights to be individually defined and programmed ‘by hand’.”[11] To the average person, the inner workings of a neural network and the operating procedures of large language models such as OpenAI’s ChatGPT, Microsoft’s Copilot, or Anthropic’s Claude are a mystery. In fact, these companies have threatened legal action against those who try to uncover their source code and training methodology. If AI is to substitute for officers, attorneys, or executioners, it must do so only when there is data transparency and information governance. Otherwise, the dark and dystopian themes of sentient artificial intelligence-induced doom and gloom may go from the pages of fantasy to the front page of classified briefings. Despite its practical applications, there are far too many ethical implications to allow AI to run its course without hesitation or limitation. In its current form, AI threatens to worsen political and social divides, fuel distrust of different groups and of the government, and fashion itself into a privacy-violating, amendment-breaching entity of government overreach without any oversight.
Conclusion
This is not the Death Star. There is no thermal exhaust port to strike and thus end the occupation of its oppressive and overbearing overlords. This is the current state of affairs and a glimpse into the not so far-flung future. Without a radical rethinking of technology or an utterly unprecedented uprising against its usage, artificial intelligence is here to stay. It will embed itself within our institutions—in fact, it already has begun to do so. When it comes to the policing, prosecuting, and punishing phases of the criminal justice system, it carries grave risk and responsibility but a great capacity for reward. To the normative ethics question of whether we should use artificial intelligence, the answer is a cautious, heavily qualified yes.
To promote ideas of justice and to protect constitutional guarantees of rights and freedoms against unlawful searches and unreasonable restrictions on individual liberty and privacy, AI must be adequately equipped to understand the complexities of current crises and the social and political issues that underscore them. If AI is built by teams of software engineers alone, or by people from a single discipline, these biases will manifest in dangerous ways and with even less accountability and transparency than in the human-dominated criminal justice system of today. Only with a team of philosophers and practitioners alike can AI be trained not just in laws and policies, but in their purpose and in the primary goal of preserving and promoting human lives. In a way, the arguments for the importance of a well-rounded liberal arts education ring true in discussions of how to train AI. In both cases, a mind is being molded with the guidance of experts and with the aim of improving the world and the systems within it. Only when policymakers locate the impending issues hiding behind ones and zeroes will artificial intelligence prove a force for good in criminal justice. Only through a data-driven and human-helped approach will AI positively transform policing, prosecuting, and punishing.
WORKS CITED
“Artificial intelligence for lawyers explained.” Bloomberg Law, accessed May 2024, https://pro.bloomberglaw.com/insights/technology/ai-in-legal-practice-explained/.
Dell’Acqua, Fabrizio, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, Karim R. Lakhani. “Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge, worker productivity, and quality.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013 (2023). https://dx.doi.org/10.2139/ssrn.4573321.
“Surveillance and predictive policing through AI.” Deloitte, accessed May 2024, https://www.deloitte.com/global/en/Industries/government-public/perspectives/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html.
Hutchins, Renee. “Racial profiling: The law, the policy, and the practice.” In Policing the Black Man: Arrest, Prosecution, and Imprisonment edited by Angela Davis. New York: Pantheon Books. pp. 95-125, 2017.
“Evidentiary standards and burdens of proof in legal proceedings.” Justia, accessed May 2024, https://www.justia.com/trials-litigation/lawsuits-and-the-court-process/evidentiary-standards-and-burdens-of-proof/.
Malikschmitt, Christopher. “The real future of AI in law: AI judges.” American Bar Association, accessed May 2024, https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2023/the-real-future-of-ai-in-law-ai-judges/.
Marwaha, Avaneesh. “7 ways artificial intelligence can benefit your law firm.” American Bar Association, accessed May 2024, https://www.americanbar.org/news/abanews/publications/youraba/2017/september-2017/7-ways-artificial-intelligence-can-benefit-your-law-firm/.
Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi. “The ethics of algorithms: Mapping the debate.” Big Data & Society 3, no. 2, 2016.
Pearsall, Beth. “Predictive policing: The future of law enforcement?” NIJ Journal, no. 266, 2010.
Siegel, Susanna. “Bias and perception.” In An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind edited by Erin Beeghly & Alex Madva. New York: Routledge. pp. 99-115, 2020.
“Ethics of artificial intelligence.” UNESCO, accessed May 2024, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.
Dell’Acqua et al., “Navigating the Jagged Technological Frontier.”
Pearsall, “Predictive Policing.”
Deloitte, “Surveillance and Predictive Policing Through AI.”
Bloomberg Law, “Artificial Intelligence for Lawyers Explained.”
Marwaha, “7 Ways Artificial Intelligence Can Benefit Your Law Firm.”
Justia, “Evidentiary Standards and Burdens of Proof in Legal Proceedings.”
Malikschmitt, “The Real Future of AI in Law.”
UNESCO, “Ethics of Artificial Intelligence.”
Hutchins, “Racial Profiling,” 96.
Siegel, “Bias and Perception,” 107.
Mittelstadt et al., “The Ethics of Algorithms,” 3.