Jury Summons

Sunday, March 1, 2020

Warning, Bias Detected: AI in Voir Dire


The use of AI in the legal profession is not a work of fiction, nor is it a prediction for the future. It is a reality in the present. While lawyers are likely familiar with AI designed to locate on-point cases in seconds rather than hours, AI has now moved into the world of jury selection through Voltaire and Momus Analytics. Move over Bull, HAL has arrived. How does this technology even work? Could it reduce (or even eliminate) a lawyer's bias when selecting jurors?

How does it work?
Without getting into the incredibly technical differences between the various programs, a few things are generally true about how they function. These AI programs use data analytics to predict juror behavior and give lawyers detailed reports about each juror in real time. Both scan social media and other public sources to build a detailed analysis of each juror. Voltaire, for instance, "explores all public data related to the potential juror, correlates the data against known patterns in human behaviour and then produces a detailed profile, with indications of the type of person they are and how their views and biases may be a positive or negative factor as part of a jury."
Voltaire also allows a lawyer to reject the AI's presuppositions when she believes they are inaccurate, and doing so instantly updates the recommended jury selection and profile. Features like this preserve a human check, letting the lawyer judge nuances and non-verbal communication that the program might miss.
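To make that description concrete, here is a minimal sketch, in Python, of the general shape of such a scoring-and-override loop. It is an illustration only, not Voltaire's actual implementation; the trait names, weights, and the JurorProfile class are all invented for the example.

```python
# Hypothetical sketch of the kind of pipeline described above -- not
# Voltaire's or Momus's actual code. Trait names and weights are invented.

from dataclasses import dataclass, field


@dataclass
class JurorProfile:
    name: str
    traits: dict[str, float]  # model-derived trait scores, e.g. {"authority_deference": 0.8}
    overrides: dict[str, float] = field(default_factory=dict)  # lawyer corrections

    def score(self, case_weights: dict[str, float]) -> float:
        """Weighted sum of traits; lawyer overrides take precedence over the model."""
        merged = {**self.traits, **self.overrides}
        return sum(case_weights.get(trait, 0.0) * value for trait, value in merged.items())


def rank_panel(panel: list[JurorProfile], case_weights: dict[str, float]) -> list[JurorProfile]:
    """Re-rank the venire; rerun whenever a lawyer overrides a trait."""
    return sorted(panel, key=lambda juror: juror.score(case_weights), reverse=True)


# Usage: the lawyer disagrees with the model's read on one juror, and the
# recommended ordering updates instantly, as the article describes.
panel = [
    JurorProfile("Juror 4", {"authority_deference": 0.8, "skepticism": 0.2}),
    JurorProfile("Juror 7", {"authority_deference": 0.3, "skepticism": 0.9}),
]
weights = {"authority_deference": -1.0, "skepticism": 1.0}  # a defense-side guess
panel[0].overrides["authority_deference"] = 0.4  # human judgment overrides the model
print([juror.name for juror in rank_panel(panel, weights)])
```

Notice that the same override mechanism that lets the lawyer correct the model is also the door through which the lawyer's own assumptions re-enter the system, a point the conclusion returns to.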

Reducing Bias in Juror Selection
We know that lawyers, like everyone else, have biases, whether implicit or explicit, and we know that these biases impact jury selection. If this were not the case, the Batson challenge would not exist. Lawyer bias normally shows itself in assumptions about the ways in which potential jurors will be biased. Perhaps having AI select jurors would remove bias from the selection process.

The AI and its algorithms were programmed by humans. Are the traits it highlights actually the result of an impartial analysis of human behavior, or is it simply giving lawyers answers to questions they already ask? And if the latter, what if the questions themselves are biased?

A tour of Voltaire's operation displays a sample juror analysis flagging a "Job Related Risk" because the individual works in law enforcement and a "VT Insight" noting that the person is interested in trophy hunting. How the AI decides which associations matter is not entirely clear. Are these flags relevant because deep data analysis proves the associations to be significant indicators of human behavior, or because most defense lawyers do not believe a police officer could be impartial in a criminal case? The profiles also include voting records. Is that because voting records are the most statistically relevant signs of behavior, or because lawyers stereotype people by them?
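If those associations are hand-written rules rather than patterns learned from data, the worry is easy to see in miniature. The following hypothetical rule set (the field names and flag labels are invented, not Voltaire's actual logic) produces exactly the kind of flags described above, and each one exists only because a programmer decided it mattered:

```python
# Hypothetical illustration of the concern above: if the "insights" are
# hand-written rules, the program is restating its authors' assumptions.
# These rules and labels are invented; they are not Voltaire's actual logic.

def flag_juror(record: dict) -> list[str]:
    """Emit risk flags from a juror's public-data record using fixed rules."""
    flags = []
    if record.get("occupation") in {"police officer", "corrections officer"}:
        flags.append("Job Related Risk")         # encodes the defense-bar stereotype
    if "trophy hunting" in record.get("interests", []):
        flags.append("Insight: trophy hunting")  # chosen by a human, not by the data
    if record.get("voted_recently"):
        flags.append("Regular voter")            # relevant, or just a familiar proxy?
    return flags


print(flag_juror({"occupation": "police officer",
                  "interests": ["trophy hunting"],
                  "voted_recently": True}))
```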

Doubling down on the importance of these associations, Momus actually states that a lawyer's performance cannot shape the jury's verdict, apparently indicating that these associations alone determine juror decision-making. The reasoning seems to be that associations indicate juror biases and biases determine behavior. This oversimplifies the situation.

Even if these associations were indicative of biases rather than rote stereotyping, we know that biases are not dispositive of behavior. Implicit bias is, to be sure, believed to be an excellent predictor of behavior (Sarah Q. Simmons, Litigators Beware: Implicit Bias, 59 Advocate 35 (2016)). For instance, jurors are implicitly biased to give lesser sentences to members of their own racial group. However, where lawyers or judges explain that a case is racially charged, this awareness changes the jurors' behavior, and implicit bias does not impact the outcome. Lawyer performance changes jurors' awareness and therefore their behavior. Consequently, Momus, and other AI, cannot know whether such associations will even be relevant in deliberations.

Conclusion


Using AI will probably not remove the lawyer's selection bias from voir dire. Unless these programs mine raw data to identify which associations actually matter, they likely only reinforce stereotypes about biases that, even if present, may be overcome by lawyer performance. Additionally, the same features that allow lawyers to change profiles based on their own observations allow biases to be programmed in at the user level. In any case, lawyers are likely to value the associations provided by the AI because they are playing the odds when selecting a jury and believe these associations have served them well in the past. Even though the machines are here, it seems selection bias is welcome to stay.

