COMPAS—Correctional Offender Management Profiling for Alternative Sanctions—is a criminal justice algorithm designed to assess a defendant's risk of recidivism and assist judges in their decision-making. The tool is used in multiple states and jurisdictions, including New York, Wisconsin, California, and Florida's Broward County. Over 1 million defendants have had their information processed by the algorithm since its inception in 1998.
In 2016, COMPAS came under national scrutiny when ProPublica published its now-famous article "Machine Bias." The piece argued that the COMPAS algorithm was biased against African American defendants: black defendants were nearly twice as likely as white defendants to be incorrectly labeled "high-risk." These accusations drew responses from COMPAS's creators and various academics, and the controversy attracted coverage in outlets such as the Washington Post.
Such discussions led many to ask: How does COMPAS work? (A common criticism leveled against these algorithms is that we never really know.) The algorithm is powered by a 137-question survey. For each case, it considers a variety of factors such as offense history, age, sex, and neighborhood context.
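COMPAS's actual model and weights are proprietary, so we can't show the real thing. But the general shape of a questionnaire-based risk tool can be sketched as a weighted sum of answers mapped to a risk category. Everything below—feature names, weights, and cut points—is hypothetical, purely to illustrate the mechanism:

```python
# Illustrative sketch only: COMPAS's real features, weights, and
# thresholds are proprietary. All numbers here are made up.

def risk_score(answers, weights):
    """Weighted sum of questionnaire answers (hypothetical weighting)."""
    return sum(weights[q] * v for q, v in answers.items())

def risk_category(score, low=3.0, high=6.0):
    """Map a raw score to a coarse label using hypothetical cut points."""
    if score < low:
        return "low"
    elif score < high:
        return "medium"
    return "high"

# Toy answers for a single hypothetical defendant
weights = {"prior_offenses": 0.8, "age_under_25": 1.5, "unstable_housing": 0.6}
answers = {"prior_offenses": 3, "age_under_25": 1, "unstable_housing": 1}

print(risk_category(risk_score(answers, weights)))  # prints "medium" (score 4.5)
```

The real tool is far more elaborate—137 questions feeding into normed decile scores—but the core idea of compressing many answers into one risk label is the same.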
But how effective is COMPAS? In 2018, two researchers, Julia Dressel and Hany Farid, evaluated the algorithm's accuracy in predicting two-year recidivism. They asked participants with no criminal justice training to look at a subset of features—seven, to be exact—and to predict whether each defendant would recidivate within two years. Their findings? These untrained participants performed as well as or better than COMPAS did: COMPAS was correct 65% of the time, while the human participants reached 67%.
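The comparison in that study comes down to a simple metric: accuracy, the fraction of predictions that match the observed outcome. A minimal sketch, using toy data rather than the study's actual dataset:

```python
# Toy illustration of the accuracy comparison. The predictions and
# outcomes below are invented, not the Dressel & Farid data.

def accuracy(predicted, actual):
    """Fraction of predictions matching observed outcomes."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

actual      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # 1 = recidivated within 2 years
compas_pred = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]  # hypothetical algorithm guesses
human_pred  = [1, 0, 1, 1, 0, 1, 1, 0, 0, 0]  # hypothetical participant guesses

print(f"COMPAS: {accuracy(compas_pred, actual):.0%}")  # prints "COMPAS: 60%"
print(f"Humans: {accuracy(human_pred, actual):.0%}")   # prints "Humans: 80%"
```

With the real study data, the two numbers land at roughly 65% and 67%—close enough that the 137-question tool offered little advantage over untrained judgment.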
How do you compare? Below, I'll show you five characteristics (132 fewer than COMPAS uses). Use them to decide whether the defendant at hand will recidivate within the next two years. Are you smarter than COMPAS?