Correctional Offender Management Profiling For Alternative Sanctions

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a case management and decision support tool developed and owned by Northpointe (now Equivant) used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.


Risk assessment

The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."

Pretrial release risk scale: Pretrial risk is a measure of the potential for an individual to fail to appear and/or to commit new felonies while on release. According to the research that informed the creation of the scale, "current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse" are the most significant indicators affecting pretrial risk scores.

General recidivism scale: The general recidivism scale is designed to predict new offenses upon release, after the COMPAS assessment is given. The scale uses an individual's criminal history and associates, drug involvement, and indications of juvenile delinquency.

Violent recidivism scale: The violent recidivism scale is meant to predict violent offenses following release. The scale uses data or indicators that include a person's "history of violence, history of non-compliance, vocational/educational problems, the person's age-at-intake and the person's age-at-first-arrest." The violent recidivism risk score is calculated as

s = a(-w) + a_{first}(-w) + h_{violence} w + v_{edu} w + h_{nc} w

where s is the violent recidivism risk score, w is a weight multiplier, a is the current age, a_{first} is the age at first arrest, h_{violence} is the history of violence, v_{edu} is the vocational/education scale, and h_{nc} is the history of noncompliance. The weight, w, is "determined by the strength of the item's relationship to person offense recidivism that we observed in our study data."
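To make the weighted-sum form of the scale concrete, here is a minimal sketch under stated assumptions: the single weight value and the item scores are made-up placeholders (the actual COMPAS weights were fit to Northpointe's study data and are proprietary), and the normalization step that the deployed tool applies to report decile risk levels is omitted.

```python
# Illustrative sketch of the violent recidivism weighted sum described above.
# The weight and item values are placeholders for illustration only; the real
# COMPAS weights and norm tables are proprietary to Equivant.

def violent_recidivism_score(current_age, age_at_first_arrest,
                             history_of_violence, vocational_education,
                             history_of_noncompliance, w=1.0):
    """Compute s = a(-w) + a_first(-w) + h_violence*w + v_edu*w + h_nc*w.

    Current age and age at first arrest enter with a negative sign, so older
    defendants and later first arrests lower the score; the three
    history/needs scales raise it.
    """
    return (current_age * -w
            + age_at_first_arrest * -w
            + history_of_violence * w
            + vocational_education * w
            + history_of_noncompliance * w)

# Hypothetical inputs: the three history/needs values are made-up item scores,
# not real COMPAS scale scores.
raw_score = violent_recidivism_score(current_age=25, age_at_first_arrest=17,
                                     history_of_violence=6,
                                     vocational_education=4,
                                     history_of_noncompliance=3)
print(raw_score)  # raw weighted sum; the deployed tool reports normalized deciles
```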


Critiques and legal rulings

The introduction of AI and algorithms in courts is often motivated by a desire to mitigate cognitive biases such as the hungry judge effect. In July 2016, the Wisconsin Supreme Court ruled that COMPAS risk scores can be considered by judges during sentencing, but the scores must be accompanied by warnings describing the tool's "limitations and cautions."

A general critique of the use of proprietary software such as COMPAS is that, since the algorithms it uses are trade secrets, they cannot be examined by the public and affected parties, which may be a violation of due process. Additionally, simple, transparent, and more interpretable algorithms (such as linear regression) have been shown to perform predictions approximately as well as the COMPAS algorithm (a simple baseline of this kind is sketched below). Another general criticism of machine-learning-based algorithms is that, since they are data-dependent, biased data will likely yield biased results. Specifically, COMPAS risk assessments have been argued to violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be racially discriminatory, to result in disparate treatment, and not to be narrowly tailored.
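As a hedged illustration of that comparison, the sketch below fits a plain logistic regression (the classification analogue of the simple linear models used in published comparisons) on a small, fully inspectable feature set. The file name recidivism.csv and the column names age, priors_count, and two_year_recid are hypothetical stand-ins for a labeled recidivism dataset; this is not the model or data from any cited study.

```python
# Sketch of a simple, interpretable baseline of the kind found to perform
# roughly on par with COMPAS. Dataset path and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("recidivism.csv")           # hypothetical labeled dataset
X = df[["age", "priors_count"]]              # small, interpretable feature set
y = df["two_year_recid"]                     # 1 = re-offended within two years

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("coefficients:", dict(zip(X.columns, model.coef_[0])))  # fully inspectable
```

Unlike a proprietary score, every coefficient of such a model can be examined by the parties and the public, which is the transparency point the critique raises.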


Accuracy

In 2016, Julia Angwin was co-author of a ProPublica investigation of the algorithm. The team found that "blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend," whereas COMPAS "makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes." They also found that only 20 percent of people predicted to commit violent crimes actually went on to do so. In a letter, Northpointe criticized ProPublica's methodology and stated that: "[T]he company does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."

Researchers at Community Resources for Justice, a criminal justice think tank, published a rebuttal of the investigation's findings. Among several objections, the CRJ rebuttal concluded that ProPublica's results "contradict several comprehensive existing studies concluding that actuarial risk can be predicted free of racial and/or gender bias."

A subsequent study has shown that COMPAS software is somewhat more accurate than individuals with little or no criminal justice expertise, yet less accurate than groups of such individuals. The study found that: "On average, they got the right answer 63 percent of the time, and the group's accuracy rose to 67 percent if their answers were pooled. COMPAS, by contrast, has an accuracy of 65 percent."
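The error-rate comparison at the center of this dispute can be made concrete with a short sketch that computes, per group, the false positive rate (labeled higher risk but did not re-offend), the false negative rate (labeled lower risk but re-offended), and overall accuracy. The arrays below are tiny made-up examples, not the Broward County data.

```python
# Per-group error rates of the kind compared in the ProPublica analysis.
# The toy predictions and outcomes are made up for illustration only.
import numpy as np

def error_rates(predicted_high_risk, reoffended):
    """Return (false positive rate, false negative rate, accuracy)."""
    pred = np.asarray(predicted_high_risk, dtype=bool)
    true = np.asarray(reoffended, dtype=bool)
    fpr = (pred & ~true).sum() / max((~true).sum(), 1)  # flagged, did not re-offend
    fnr = (~pred & true).sum() / max(true.sum(), 1)     # not flagged, re-offended
    acc = (pred == true).mean()
    return fpr, fnr, acc

# Hypothetical predictions (1 = labeled higher risk) and outcomes
# (1 = re-offended) for two groups.
group_a = error_rates([1, 1, 1, 0, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1, 1, 0])
group_b = error_rates([0, 0, 1, 0, 1, 0, 0, 1], [1, 0, 1, 1, 1, 0, 0, 1])
print("group A (FPR, FNR, accuracy):", group_a)
print("group B (FPR, FNR, accuracy):", group_b)
```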


Further reading

* Sample COMPAS Risk Assessment


See also

* Algorithmic bias
* Garbage in, garbage out
* Legal expert systems
* Loomis v. Wisconsin
* Criminal sentencing in the United States

