Artificial Intelligence in America’s Criminal Justice System

(Author’s Note: Below is an Op-Ed I wrote for a school project. It’s about an issue many people don’t know about, and so I wanted to share it with the world.)

Imagine a world where artificial intelligence dictates what you should wear, what you should eat for breakfast, and whether you should ask for that pay raise. Imagine a world where artificial intelligence is used to determine criminal sentences.

What if I told you that world has arrived?

Welcome to the world of risk assessment algorithms.

Risk-assessment algorithms are self-learning computer programs that use aggregate data to calculate a defendant’s risk of recidivism, or the likelihood of committing another crime after being released from prison. Questionnaires ask everything from the defendant’s employment status to familial drug use, and the answers are run through an algorithm that spits out a score from 1 (low risk) to 10 (high risk). A high score can lead to pretrial detention and a harsh sentence, while a low score does just the opposite.
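
To make that concrete, here is a minimal sketch of how a weighted questionnaire might be turned into a 1-to-10 score. To be clear: COMPAS’s real formula is secret, so every question and weight below is hypothetical and purely for illustration.

```python
# Purely illustrative sketch of a questionnaire-based risk score.
# COMPAS's real formula is secret; every question and weight here is hypothetical.

def raw_risk_score(answers: dict) -> float:
    """Combine weighted questionnaire answers into a raw score."""
    weights = {
        "prior_arrests": 1.5,    # hypothetical weight applied per prior arrest
        "unemployed": 2.0,       # 1 if unemployed, 0 otherwise
        "family_drug_use": 1.0,  # 1 if a family member uses drugs, 0 otherwise
        "age_under_25": 1.5,     # 1 if under 25, 0 otherwise
    }
    return sum(weights[q] * answers.get(q, 0) for q in weights)

def decile_score(raw: float, max_raw: float = 12.0) -> int:
    """Map the raw score onto the 1 (low risk) to 10 (high risk) scale."""
    scaled = 1 + round(9 * min(raw, max_raw) / max_raw)
    return min(max(scaled, 1), 10)

answers = {"prior_arrests": 2, "unemployed": 1, "family_drug_use": 1, "age_under_25": 0}
print(decile_score(raw_risk_score(answers)))  # prints 5 on the 1-10 scale
```

Notice that race never appears as an input, yet answers about employment, age, and family history can stand in for it; that is exactly how the bias discussed below creeps in.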

Although risk scores might not be the only determining factor in sentencing, many judges take them into account. However, the scores are often misinterpreted. Napa County Superior Court Judge Mark Boessenecker cautions that risk-assessment scores are not always accurate indicators of danger: “A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job,” Boessenecker said. “Meanwhile, a drunk guy will look high risk because he’s homeless.”

It goes without saying that these algorithms are incredibly problematic. Not only do they reduce the entire experience of a human being to a number between one and ten, but they are also racially biased and hidden from the public eye. Yet these algorithms are currently used in more than half of U.S. states.

One algorithm in particular, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), has recently come under scrutiny for discrimination following a ProPublica report and a 2016 court case, Wisconsin v. Loomis. COMPAS continues to be used in several states, including New Mexico, Michigan, and Florida.

ProPublica’s 2016 statistical analysis found that COMPAS mistakenly labels black defendants as high risk at nearly twice the rate of their white counterparts. Even when controlling for a myriad of other factors (such as age, gender, and prior crimes), black defendants were 77 percent more likely to be labeled as higher risk of violent recidivism.
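
For readers wondering what “controlling for other factors” looks like in practice, the standard technique is a logistic regression, where a figure like “77 percent more likely” corresponds to an odds ratio of roughly 1.77. The sketch below demonstrates that general technique on made-up data; it is not ProPublica’s actual dataset or code.

```python
# Sketch of "controlling for other factors" with a logistic regression.
# The data below is simulated for illustration; it is NOT ProPublica's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "priors": rng.poisson(2, n),
    "female": rng.integers(0, 2, n),
    "black": rng.integers(0, 2, n),
})
# Simulate a "labeled high risk" outcome with a built-in racial effect (log-odds +0.57).
log_odds = -2 + 0.3 * df["priors"] - 0.02 * df["age"] + 0.57 * df["black"]
df["high_risk"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Regress the label on race while holding age, priors, and gender constant.
X = sm.add_constant(df[["age", "priors", "female", "black"]])
result = sm.Logit(df["high_risk"], X).fit(disp=False)

print(f"odds ratio for 'black': {np.exp(result.params['black']):.2f}")
# An odds ratio near 1.77 means roughly 77 percent higher odds of being
# labeled high risk, after the other factors are accounted for.
```

ProPublica’s published methodology describes a regression of this kind run on the real Broward County records.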

White defendants are assigned, on average, significantly lower risk scores than black defendants. The graphs from ProPublica’s study highlight this discrepancy.

[Charts: risk-score distributions for black defendants and white defendants. Source: ProPublica analysis of COMPAS data from Broward County, Fla.]

It may surprise you that COMPAS’s questionnaire does not explicitly ask for race. So how could a computer program be “racist”?

The problem is in the data. For years, minorities and low-income people have been wrongfully thrown behind bars at disproportionate rates. Much of this is due to over-policing of certain neighborhoods, high stop-and-frisk rates, and a lack of affordable legal resources. Consequently, most crime datasets suggest that people of color and residents of low-income neighborhoods commit more crimes than they actually do.

This isn’t the first time artificial intelligence has revealed society’s discriminatory biases. In 2015, a Google image-recognition program labeled photos of black people as gorillas. And it didn’t take long after launch for a Microsoft chatbot designed to learn from Twitter users to start spewing racist and anti-Semitic tweets of its own.

What’s worse is that algorithms like COMPAS are self-learning: every time new data is fed into the system, the model updates itself. As a result, already embedded biases are amplified and perpetuated, feeding a vicious cycle of discrimination.

For example, not long ago, a program called PredPol created a feedback loop that resulted in the over-policing of marginalized communities. The program pointed police toward certain crime “hot spots,” based on data already gathered about previous arrests. PredPol flagged majority-black neighborhoods at twice the rate of white ones, despite statisticians finding that drug use and crime rates were far more evenly distributed. As police followed the algorithm and made more arrests in those neighborhoods, the program adjusted to place even more “hot spots” in the same areas, leading to still more arrests. The result? Flagrant over-policing of majority-POC neighborhoods and under-policing of white ones.
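
To see how a loop like this feeds itself, here is a toy simulation: two neighborhoods with identical true crime rates, where patrols are allocated in proportion to past arrests and new arrests can only be recorded where patrols are sent. All of the numbers are hypothetical, and this is not PredPol’s actual model.

```python
# Toy simulation of a predictive-policing feedback loop.
# Both neighborhoods have the SAME true crime rate; only the starting
# arrest records differ. Numbers are hypothetical, not PredPol's model.
import random

random.seed(1)
TRUE_CRIME_RATE = 0.05
TOTAL_PATROLS = 100
arrests = {"A": 12, "B": 6}  # neighborhood A starts with more recorded arrests

for year in range(1, 6):
    total = sum(arrests.values())
    # "Hot spots": patrols are allocated in proportion to past arrests.
    patrols = {n: round(TOTAL_PATROLS * count / total) for n, count in arrests.items()}
    # Arrests are only made where officers are actually patrolling.
    for n in arrests:
        arrests[n] += sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols[n]))
    print(f"Year {year}: neighborhood A gets {patrols['A']}% of patrols, "
          f"recorded arrests A={arrests['A']}, B={arrests['B']}")
```

Even though both neighborhoods break the law at the same rate, the one that starts with more recorded arrests keeps drawing most of the patrols, so its arrest count grows fastest year after year and the initial disparity never corrects itself.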

Humanity expected artificial intelligence to be the antidote to the innate biases that plague us all. However, it seems the creation fails to rise above its creator.

It doesn’t stop there. Quite possibly the most terrifying thing about risk-assessment algorithms is their lack of transparency. Even when the questionnaires are made public, the calculations behind the risk scores remain a mystery.

Sandra Wachter, a legal scholar at the University of Oxford and the Alan Turing Institute, confirms that several legal loopholes in the United States allow companies to keep their algorithms away from public scrutiny–or, quite frankly, any kind of scrutiny. Legislatures simply cannot keep up with such rapidly evolving technology. More than half of U.S. states have not tested their algorithms for validity at all.

Even worse, many of these algorithms are not created by the government; they are built by for-profit corporations. COMPAS, for example, was created by a company called Northpointe.

For-profit corporations have little incentive to spend extra time and money assessing the fairness of their algorithms–especially if no one is holding them accountable. And because Northpointe considers its algorithm proprietary, the government is nearly powerless to force transparency.

Judges aren’t happy with the secrecy either. In the Loomis case, Wisconsin Supreme Court Justice Shirley Abrahamson wrote that “this court’s lack of understanding of COMPAS was a significant problem…the court repeatedly questioned both the State’s and defendant’s counsel about how COMPAS works. Few answers were available.”

Despite all of their faults, there are, indeed, reasons why these algorithms exist. It would benefit society if we could accurately sort low-risk offenders from high-risk ones. Such a distinction would allow low-risk offenders to stay with their families pretrial (decreasing their risk of reoffending) while protecting society from high-risk offenders (and making sure they show up in court). One credible study from Cornell University found that many risk-assessment algorithms are routinely more accurate than a judge’s discretion. But even if risk-assessment algorithms can increase overall utility in a society, that does not make up for the fact that they punish those who have already been put on an unequal playing field.

According to these assessments, simply being poor or having been born to absent parents can increase your “risk” of recidivism. Continuing such a practice only deepens existing racial and socioeconomic disparities. Most would agree that advancements in technology, such as AI, should bring the marginalized forward, not push them back.

As Scott Roberts, senior criminal justice campaign director at Color of Change, illustrates, “[We] need policies and practices that reverse mass incarceration, not ones that reinforce the racism already painfully present in the system. Until we address the inherent racism in our justice system… technological ‘solutions’ like risk assessments will continue to fall short.”

Artificial intelligence should be used to mend–not to magnify–problems like the racial crime disparity. Technology has the power to change our world for the better, but until we begin holding its creators and our lawmakers accountable, artificial intelligence will continue to aid the powerful and take from the powerless.
