Politics · Technology

Automated Justice: The Role of Artificial Intelligence In The US Justice System

(Quick crash course on what artificial intelligence is for those who might need a refresher.) 

According to this HubSpot article released a couple of weeks ago, here’s a list of the jobs they think are most likely to be taken by artificial intelligence (AI) in the next few years.

  1. Telemarketers
  2. Bookkeeping Clerks
  3. Compensation and Benefits Manager
  4. Receptionist
  5. Couriers
  6. Proofreaders
  7. Computer Support Specialists
  8. Market Research Analysts
  9. Advertising Salespeople
  10. Retail Salespeople

Here’s a list of jobs I think AI will take in the next 10 years:

  1. Wealth Advisers
  2. Lawyers
  3. Pilots
  4. Tax Auditors
  5. Truck Drivers
  6. Taxi Drivers
  7. Investment Bankers
  8. Doctors?
  9. Computer Programmers
  10. Musicians

If we get to the point where we can depend on AI to take care of our health, transportation, taxes, money, and entertainment, why not go all the way and allow AI into public institutions like our justice system? I assumed I wouldn’t find much research on AI in the judicial system; surely it would be the one place AI dared not touch. Wrong.

Here’s an excerpt from an article in the Guardian. It’s about nine months old.

The AI “judge” has reached the same verdicts as judges at the European court of human rights in almost four in five cases involving torture, degrading treatment and privacy.

The algorithm examined English language data sets for 584 cases relating to torture and degrading treatment, fair trials and privacy. In each case, the software analysed the information and made its own judicial decision. In 79% of those assessed, the AI verdict was the same as the one delivered by the court.

The article goes on to say:

Dr Nikolaos Aletras, the lead researcher from UCL’s department of computer science, said: “We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes.

“It could also be a valuable tool for highlighting which cases are most likely to be violations of the European convention on human rights.” An equal number of “violation” and “non-violation” cases were chosen for the study.

So it’s happening. There are people thinking about, researching, and applying AI to judicial processes. Based on this particular study, it’s not that far off either. At 79% agreement with the human judges’ verdicts, it will only get better, and it will most likely serve as a qualifier/screening tool for cases that should be evaluated by human rights judges.
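Just to make the setup concrete, here’s a rough sketch of what a study like this looks like in code. To be clear, this is not the UCL team’s actual pipeline; the file name and columns are placeholders I made up. The idea is simply: turn the case text into features, train a classifier on the court’s past verdicts, and measure how often it agrees with the judges on cases it hasn’t seen.

```python
# A minimal sketch, not the UCL study's actual pipeline. Assumes a CSV named
# "echr_cases.csv" (hypothetical) with a "text" column holding the case facts
# and a "violation" column holding the court's verdict (0 or 1).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

cases = pd.read_csv("echr_cases.csv")

# Turn each case's text into word-frequency features.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(cases["text"])
y = cases["violation"]

# Hold out some cases so agreement is measured on decisions the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# "Agreement" here is simply the share of held-out cases where the model's
# verdict matches the court's -- the same kind of figure as the 79% above.
agreement = accuracy_score(y_test, model.predict(X_test))
print(f"Agreement with human verdicts: {agreement:.0%}")
```

The 79% figure in the Guardian piece is exactly this kind of number: a percentage of cases where the machine’s call lined up with the court’s, nothing more mysterious than that.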

Dr. Aletras’s work seems academic and research-focused, with limited impact on the day-to-day operation of the European Court of Human Rights. Maybe in the future there’ll be some application, but not today. That may be the case for AI as a judge, but how about other parts of the judicial system?

Then I came upon this article in the New York Times that discusses how AI already plays a significant role in the judicial process. It’s a couple of weeks old. AI systems are used for everything from evaluating evidence like DNA and fingerprints to deploying police officers in the most efficient manner. Here’s a quick story about one of those applications that shows one of the key challenges:

“Take the case of Glenn Rodriguez. An inmate at Eastern Correctional Facility in upstate New York, Mr. Rodriguez was denied parole last year despite having a nearly perfect record of rehabilitation. The reason? A high score from a computer system called Compas. The company that makes Compas considers the weighting of inputs to be proprietary information. That forced Mr. Rodriguez to rely on his own ingenuity to figure out what had gone wrong.

This year, Mr. Rodriguez returned to the parole board with the same faulty Compas score. He had identified an error in one of the inputs for his Compas assessment. But without knowing the input weights, he was unable to explain the effect of the error, or persuade anyone to correct it. Instead of challenging the result, he was left to try to argue for parole despite the result.”
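The frustrating part of that story is the hidden weighting. Here’s a toy illustration of why a single wrong input is impossible to reason about from the outside; the inputs, weights, and scoring formula below are all invented for the sake of the example, not anything from Compas. The same one-point error can barely move the score or swing it entirely, depending on weights only the vendor knows.

```python
# Toy illustration only -- invented inputs and weights, not Compas's model.
# Imagine a risk score that is just a weighted sum of questionnaire answers.
def risk_score(inputs, weights):
    return sum(inputs[k] * weights[k] for k in inputs)

# A Mr. Rodriguez-style situation: one input was recorded incorrectly.
reported  = {"prior_arrests": 3, "age_at_first_offense": 2, "gang_affiliation": 1}
corrected = {"prior_arrests": 3, "age_at_first_offense": 2, "gang_affiliation": 0}

# Two equally plausible (hidden) weightings give very different pictures
# of how much that single error mattered.
weights_a = {"prior_arrests": 1.0, "age_at_first_offense": 1.0, "gang_affiliation": 0.5}
weights_b = {"prior_arrests": 0.5, "age_at_first_offense": 0.5, "gang_affiliation": 4.0}

for name, w in [("weights A", weights_a), ("weights B", weights_b)]:
    delta = risk_score(reported, w) - risk_score(corrected, w)
    print(f"{name}: the error shifts the score by {delta:.1f} points")

# Without knowing which weighting the vendor uses, there is no way to say
# whether the erroneous input changed the outcome at all.
```

That is the position Mr. Rodriguez was in: he knew an input was wrong, but without the weights he couldn’t show whether it mattered.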

Did Mr. Rodriguez deserve parole? Based on traditional parole metrics, yes. He had a nearly perfect record of rehabilitation. Based on Compas, a product from a private company that essentially tries to predict the likelihood of recidivism using “proprietary data” and “algorithms,” Mr. Rodriguez stood a higher-than-usual chance of coming back to jail, so the board thought it would just make more sense to keep him there. Have you spotted the problem yet?

Oscar the Grouch - Garbage In, Garbage Out

Back in the day, when I aspired to play point guard in the NBA, I would focus on shooting a ton of free throws. I thought if I could just get up a high volume of free throws, I could increase my percentage. I was missing a ton of them, and it didn’t look like shooting more was helping. It wasn’t until my eighth-grade coach told me, “Practice doesn’t make perfect; perfect practice makes perfect,” that it clicked. I could shoot all I wanted, but if I had garbage form, I was just practicing garbage form and wasting my time. As most of you know, my basketball career ended in retirement in eighth grade, but that lesson has stayed with me and has ample significance for AI and machine learning.

To improve, AI and machine learning algorithms must be trained on real data. In the justice system specifically, companies work with state and federal governments to train and develop all types of algorithms. The problem is that these systems often compound the very societal and institutional realities they are supposed to prevent. They may be trained on a high volume of data, but it’s just like me shooting with bad form.
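Here’s a small, fully synthetic sketch of that “bad form” problem. Every number below is made up: two groups behave identically, but one group’s behavior gets recorded more often because it’s policed more heavily, and a model trained on those records dutifully learns the disparity, no matter how much data you feed it.

```python
# Fully synthetic illustration of "garbage in, garbage out".
# Two groups with identical underlying behavior, but group 1 is historically
# policed more heavily, so its recorded "re-arrest" labels are inflated.
# A model trained on those labels learns the inflation, not the truth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                  # group 0 or group 1
true_reoffend = rng.random(n) < 0.30           # same true rate for both groups

# Biased labels: group 1's reoffending is recorded 90% of the time,
# group 0's only 50% of the time (more policing -> more recorded arrests).
detection_rate = np.where(group == 1, 0.9, 0.5)
label = true_reoffend & (rng.random(n) < detection_rate)

X = group.reshape(-1, 1)                       # the model can "see" group membership
model = LogisticRegression().fit(X, label)

for g in (0, 1):
    predicted_risk = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted risk {predicted_risk:.0%} "
          f"(true reoffense rate is 30% for both)")
```

No amount of extra data fixes this: feed the model ten times as many biased records and it just becomes more confident in the same skewed picture.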

Remember Mr. Rodriguez? Let’s take a look at ProPublica’s evaluation of Compas’s recidivism algorithm to see if it offers any insights into Compas’s performance. (Give it a read if you have a chance.) Here’s the summary of their analysis:

“Our (ProPublica) analysis found that:

  • Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Our analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent).
  • White defendants were often predicted to be less risky than they were. Our analysis found that white defendants who re-offended within the next two years were mistakenly labeled low risk almost twice as often as black re-offenders (48 percent vs. 28 percent).
  • The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants.
  • Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.
  • The violent recidivism analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants.”

How interesting: ProPublica’s analysis looks like it mimics some of the realities we see in our justice system. This shouldn’t be surprising. The Compas algorithm was most likely trained on data from states whose laws, procedures, convictions, and outcomes disproportionately affect males, people of color, and people in urban areas. It most likely includes data points from over-policed areas.
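If you want to check numbers like these yourself (ProPublica’s actual data and calculations are on GitHub, per the note at the end of this post), the core calculation is just a pair of error rates computed separately for each group. Here’s a rough sketch; the column names are placeholders I chose, not necessarily what ProPublica’s published data uses.

```python
# Sketch of the per-group error-rate calculation behind findings like ProPublica's.
# Column names ("race", "high_risk", "reoffended") are invented placeholders.
import pandas as pd

def error_rates(df, group_col="race", score_col="high_risk", outcome_col="reoffended"):
    rows = []
    for group, sub in df.groupby(group_col):
        did_not_reoffend = sub[sub[outcome_col] == 0]
        did_reoffend = sub[sub[outcome_col] == 1]
        rows.append({
            group_col: group,
            # labeled high risk but did NOT reoffend (false positive rate)
            "labeled_high_but_did_not": (did_not_reoffend[score_col] == 1).mean(),
            # labeled low risk but DID reoffend (false negative rate)
            "labeled_low_but_did": (did_reoffend[score_col] == 0).mean(),
        })
    return pd.DataFrame(rows)

# Tiny fabricated example just to show the shape of the output.
example = pd.DataFrame({
    "race":       ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 0, 1, 0, 0, 1, 0],
    "reoffended": [0, 1, 0, 1, 0, 1, 1, 0],
})
print(error_rates(example))
```

Comparing those two rates across groups is exactly the kind of comparison behind the “45 percent vs. 23 percent” and “48 percent vs. 28 percent” figures above.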

When we leverage AI and machine learning, in any industry, we have to make sure we don’t allow the flaws in our institutions to creep into the systems we develop. If we do, our solutions will cause more harm than good.

 

- ProPublica published the calculations and data for this analysis on GitHub.
