AI in Voir Dire
How does it work?
Without getting into the incredibly technical differences between the various programs, a few things are generally true about how they function. AI programs use data analytics to predict juror behavior and deliver detailed reports to lawyers about each juror in real time. Social media and other public sources are scanned and analyzed to produce a detailed profile of each juror. Voltaire, for instance,
explores all public data related to the potential juror, correlates the data against known patterns in human behavior, and then produces a detailed profile, with indications of the type of person they are and how their views and biases may be a positive or negative factor as part of a jury. Voltaire also allows lawyers to reject the AI's presuppositions when they believe those presuppositions are inaccurate, and doing so instantly updates the recommended jury selection and profile. Features like this allow the human factor to judge nuances and non-verbal communication that might be missed by the program.
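To make the described loop concrete, here is a minimal sketch of how a profile-scoring system with an attorney override might work. Every field name, trait, and weight below is invented for illustration; this is not Voltaire's actual model or code, only a toy version of the pattern the text describes: the model scores jurors from public-data "traits," and a lawyer's override immediately re-ranks the recommendation.

```python
# Hypothetical sketch of an AI juror-scoring loop with a human override.
# All traits and weights are invented; none reflect any real product.
from dataclasses import dataclass, field

@dataclass
class JurorProfile:
    name: str
    traits: dict                                   # trait -> model weight
    overrides: dict = field(default_factory=dict)  # attorney corrections

    def score(self) -> float:
        # Attorney overrides replace the model's presuppositions, so a
        # lawyer's judgment of nuance instantly changes the score.
        merged = {**self.traits, **self.overrides}
        return sum(merged.values())

def rank_panel(panel):
    # Re-rank the venire whenever any score changes (e.g., after an override).
    return sorted(panel, key=lambda j: j.score(), reverse=True)

# The model flags juror A as unfavorable; the attorney disagrees and
# rejects that presupposition, which immediately reorders the ranking.
a = JurorProfile("A", {"law_enforcement": -0.8, "community_ties": 0.3})
b = JurorProfile("B", {"community_ties": 0.2})
assert [j.name for j in rank_panel([a, b])] == ["B", "A"]
a.overrides["law_enforcement"] = 0.0   # lawyer rejects the model's inference
assert [j.name for j in rank_panel([a, b])] == ["A", "B"]
```

The sketch shows why the override feature matters: the ranking is only as good as the weights, and a single human correction can invert the recommendation.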
Reducing Bias in Juror Selection
We know that lawyers, like everyone else, have biases, whether implicit or explicit, and we know that these affect jury selection. If this were not the case, the Batson challenge would not exist. Discussions of bias in voir dire, however, normally focus on the ways in which potential jurors may be biased, not the lawyers themselves. Perhaps having AI select jurors would remove bias from the selection process.
But the AI and its algorithms were programmed by humans. Are the things it highlights actually the result of an impartial analysis of human behavior, or is it simply giving lawyers answers to the questions they already ask? And if so, what if the questions themselves are biased?
A demonstration of Voltaire's operation displays a sample juror analysis indicating a "Job Related Risk" because the individual works in law enforcement, and a "VT Insight" that the person is interested in trophy hunting. How the AI decides which associations are important is not completely clear. Are these flagged because deep data analysis proves them to be significant indicators of human behavior, or because most defense lawyers do not believe a police officer could be impartial in a criminal case? The profiles also include voting records. Is this because voting records are statistically meaningful signs of behavior, or because lawyers stereotype people by them?
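The ambiguity described above can be made concrete: a handful of hand-written rules would produce output indistinguishable from data-driven "insights." The sketch below is hypothetical; the rules and labels are invented and deliberately encode the very lawyer assumptions the text questions (that law enforcement implies risk, that voting records matter), to show that flag-style output alone cannot tell you whether statistics or stereotype is behind it.

```python
# Hypothetical rule-based flagger. These rules are invented for
# illustration and encode assumptions, not statistical findings.
def flag_juror(record: dict) -> list:
    flags = []
    if record.get("occupation") == "law enforcement":
        flags.append("Job Related Risk")   # a lawyer's prior, not a proven signal
    if "trophy hunting" in record.get("interests", []):
        flags.append("Insight: trophy hunting")
    if record.get("voting_record"):
        flags.append("Frequent voter")     # relevance never established
    return flags

juror = {"occupation": "law enforcement",
         "interests": ["trophy hunting"],
         "voting_record": ["2016", "2020"]}
assert flag_juror(juror) == ["Job Related Risk",
                             "Insight: trophy hunting",
                             "Frequent voter"]
```

From the outside, a report generated this way looks identical to one backed by genuine behavioral analysis, which is exactly the transparency problem the text raises.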
Doubling down on the importance of these associations, Momus actually states that the lawyer's performance cannot shape the jury's verdict, apparently indicating that these associations absolutely determine juror decision-making. The reasoning seems to be that associations indicate juror biases, and biases determine behavior. This oversimplifies the situation.
Even if these associations were indicative of biases rather than rote stereotyping, we know that biases are not dispositive of behavior. Implicit bias is believed to be an excellent predictor of behavior in the ordinary case (Sarah Q. Simmons, Litigators Beware: Implicit Bias, 59 Advocate 35 (2016)). For instance, jurors are implicitly biased toward giving lesser sentences to defendants of their own racial group. However, where lawyers or judges explain that the case is racially charged, that awareness changes jurors' behavior, and implicit bias does not impact the outcome. Lawyer performance changes jurors' awareness and therefore jurors' behavior. Consequently, Momus, and other AI, cannot know whether such associations will even be relevant in deliberations.