Judicial inconsistency in the multi-layered justice system of the US is a persistent problem, attracting accusations of institutional bias and huge gaps in scrutiny. A mechanically accurate means of predicting reoffending would, in theory, be indispensable in attacking the very roots of mass incarceration.
Since the conception of recidivism algorithms in 1998, AI has been posited as the future of prediction. A recent study has seemingly reaffirmed the value of these systems, which a 2018 study had thrown into doubt. Sharad Goel, of Stanford University, presented basic criminal profiles to 50 crowdsourced volunteers. With instant feedback, the volunteers performed as well as participants had in the earlier tests, matching COMPAS's 65% accuracy in predicting reoffending, and reaching 83% accuracy for violent crimes. Without feedback, however, human accuracy on violent crimes fell to 60%. Despite being told that only 11% of the pool went on to violently reoffend, the volunteers drastically overestimated the rate of violent crime, suggesting that human fear contaminates such estimates. A third test employed a more advanced program, LSI-R, which weighs ten times as many variables. Human accuracy at predicting re-incarceration, arguably the more important outcome, dropped to a meagre 58%, against LSI-R's 74%.
Notwithstanding these findings, the study has obvious limitations. Crowdsourced volunteers do not simulate the proper judicial context for evaluation, where the offender's personal attitude, the details of the case and, of course, the strength of the legal defence weigh heavily on the verdict.
Ironically, the algorithmic estimation, virtuous in its clarity and in the aforementioned freedom from information overload, could actually complicate the process further. As Goel indicates, there may be a ceiling on how much statistics can predict recidivism. Hany Farid and Julia Dressel, who led the 2018 study, found that an algorithm simple enough to squeeze onto the back of a business card performed as well as COMPAS, and they expressed doubts over whether AI can make significant further strides in predicting recidivism.
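To see how little a "business-card" model needs, here is a minimal sketch of that kind of classifier: a two-feature logistic rule over age and number of prior convictions, roughly the inputs Dressel and Farid's simple model relied on. The weights and threshold below are illustrative assumptions, not the published coefficients.

```python
import math

def reoffend_probability(age: int, priors: int) -> float:
    """Toy logistic model: risk falls with age, rises with prior count.

    The coefficients are hand-picked for illustration only.
    """
    z = 1.5 - 0.06 * age + 0.35 * priors
    return 1.0 / (1.0 + math.exp(-z))

def predict(age: int, priors: int, threshold: float = 0.5) -> bool:
    """Flag a defendant as high risk if the modelled probability crosses the threshold."""
    return reoffend_probability(age, priors) >= threshold

print(predict(22, 4))   # young defendant with several priors -> True (high risk)
print(predict(55, 0))   # older defendant with no priors -> False (low risk)
```

The point of the sketch is that the entire decision rule is one weighted sum and a cutoff, which is why it fits on a business card yet can rival a commercial system with dozens of inputs.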
Alarmingly, the algorithms exhibit a racial bias, with black defendants twice as likely to be inaccurately judged (even though race wasn't included as an input, other variables correlate with race). Personally, I think this is an unintelligent use of artificial intelligence; c'mon guys, where's my pocket therapist? RoboCop?