Jesse Brown
IST 343
Professor Tacheva
12/5/21
Small Project:
Artificial Unintelligence
In recent decades, algorithmic risk assessment has become a widely used tool in the US criminal justice system for shaping the outcome of a defendant's pretrial hearing and even sentencing. Most states have adopted a risk assessment algorithm to help judges make pretrial decisions, such as the bail amount one must pay to be released before trial, or whether release on bail is offered at all. Some states even use these risk ratings when determining a defendant's sentence, allowing judges to cite the defendant's risk score as evidence. However, these algorithmic risk assessments have been found to be heavily racially biased, often making unfair determinations about defendants that correlate with their race. Algorithmic risk assessments are a racially biased means of predicting an individual's future criminal risk and should not be used in a justice system that is already riddled with unjust racial bias.
Algorithmic risk assessments such as the one created by the for-profit company Northpointe were put in use to counteract the personal biases of the humans involved in the legal process. The goal was to have a computer algorithm accurately determine the likelihood of a defendant committing another crime, so that fewer people would be incarcerated overall and the process would be fairer and more standardized. The algorithms are generally based on a lengthy series of questions answered by the defendants directly or drawn from their criminal records. Interestingly enough, the questions did not ask about race at all, yet numerous statistics showed clear signs of racial bias in the algorithms' output. For example, in a ProPublica article titled Machine Bias, discussing Northpointe's statistics from a county in Florida on defendants labeled low or high risk and whether or not they re-offended, the authors write, "...blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend. It makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower risk but go on to commit other crimes" (ProPublica, 2020b). Based on the statistics the quote refers to, 44.9% of African Americans were labeled high risk but did not reoffend, while that figure was 23.5% for Whites (a simple calculation of these error rates is sketched below). This egregiously race-biased prediction error, combined with the algorithm's accuracy of a mere 61% in predicting recidivism, means that Northpointe's product, called COMPAS, falls far short of its goal of reducing incarceration rates: the algorithm ends up incarcerating the wrong people while missing some of those who may actually require incarceration.
States have even allowed judges to go as far as citing a defendant's risk score when deciding a sentence. Consider the case of Paul Zilly, who stole a push lawnmower after relapsing into a methamphetamine habit he had been working to overcome with his Christian pastor. Zilly's score indicated a high risk of future violent crime, and he was sentenced to two years in prison; he appealed, and with the algorithm's creator testifying as a witness, he was able to get his sentence reduced to 18 months. Risk assessment algorithms such as Northpointe's COMPAS are used across the United States, yet there are very few studies that actually validate the algorithms' accuracy. According to a ProPublica article explaining their analysis of Northpointe's COMPAS algorithm, "Across every risk category, black defendants recidivated at higher rates" (ProPublica, 2020a). This at the very least shows how our country is plagued by systemic racism, in a way where African Americans are both pushed toward crime and then treated unfairly by the legal process.
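To make the error rates discussed above concrete, here is a minimal Python sketch of how a group's false positive rate (labeled high risk but did not re-offend) and an algorithm's overall accuracy are computed from a simple table of risk labels versus observed re-offending. The counts used are placeholders for illustration only, not ProPublica's actual data, and the code is not Northpointe's or ProPublica's.

# Illustrative sketch only: how figures like the 44.9% vs. 23.5% false positive
# rates and the 61% overall accuracy cited above are computed.
# The counts below are placeholders, not ProPublica's actual data.

def false_positive_rate(high_risk_no_reoffend, low_risk_no_reoffend):
    """Share of people who did NOT re-offend but were still labeled high risk."""
    did_not_reoffend = high_risk_no_reoffend + low_risk_no_reoffend
    return high_risk_no_reoffend / did_not_reoffend

def overall_accuracy(counts):
    """Fraction of all predictions that matched the observed outcome."""
    correct = counts["high_reoffend"] + counts["low_no_reoffend"]
    return correct / sum(counts.values())

# Hypothetical counts for one demographic group.
group = {
    "high_reoffend": 300,     # labeled high risk and did re-offend
    "high_no_reoffend": 450,  # labeled high risk but did not re-offend (false positives)
    "low_reoffend": 200,      # labeled low risk but did re-offend (false negatives)
    "low_no_reoffend": 550,   # labeled low risk and did not re-offend
}

fpr = false_positive_rate(group["high_no_reoffend"], group["low_no_reoffend"])
acc = overall_accuracy(group)
print(f"False positive rate: {fpr:.1%}")  # 45.0% with these placeholder counts
print(f"Overall accuracy:    {acc:.1%}")  # 56.7% with these placeholder counts

The point the calculation makes is that a single overall accuracy figure can mask very different false positive rates across groups, which is exactly the disparity ProPublica reported.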
ProPublica's article analyzing the algorithms that the justice system and law enforcement use to assess defendants, through a mix of statistics drawn from its own data collection and first-hand accounts, is a good example of the computer-assisted journalism ethos. The journalists built their own database to analyze and used the data they found to support their primary-source accounts. They were therefore not relying solely on the data they collected, and they were able to use their analysis to corroborate their interviews. The article matters a great deal to the US Department of Justice, to African Americans, and to racial advocacy groups alike. It provides solid evidence of the clear biases against African Americans in our legal process and can hopefully help leverage change within our legal system. However, until a more accurate and validity-tested risk assessment algorithm is created, it is unjust to keep today's biased systems in place; they should be suspended, or at least given far less weight, until proven valid.
Bibliography
ProPublica. (2020a, February 29). How We Analyzed the COMPAS Recidivism Algorithm. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
ProPublica. (2020b, February 29). Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing