
Artificial Intelligence Produces Artificial Justice

The opinions expressed by columnists are their own and do not necessarily represent the views of this publication.

Thanks to today’s “Internet of Things” (IoT), there is an automated solution for almost every aspect of our lives. From the mundane, if not downright silly -- kitchen faucets that activate on voice command -- to the impressive -- massive shipping warehouses run by robotics -- many aspects of life today go beyond what was imagined decades ago in science fiction. While we still are waiting for the flying cars depicted in the 1960s television show The Jetsons, or the space hotels portrayed in the sci-fi epic 2001: A Space Odyssey, the array of technologically driven devices available to the average citizen is indeed impressive.

Yet, while automation and artificial intelligence simplify or altogether eliminate many of the activities of day-to-day life, the technology complicates others. For example, how do you program a self-driving car to choose, in an emergency, between the life of a pedestrian and that of its “driver”? Even more complex are the questions now being asked in the context of judicial systems; decisions cutting to the heart of individual liberty. As a Forbes article posited, what does justice look like if, or rather when, many aspects of judicial procedure, such as sentencing, are left to computer algorithms?

On the surface, injecting AI into certain legal procedures may appear to make sense for the same reasons it is used in other industries and professions. In many arenas, artificial intelligence can process information far faster than humans, even while incorporating astronomically more data; and it does so without “human error.”

Leaving aside for the moment the question of whether all human “error” should be eliminated from decision-making, advocates for such technology ask why we would not want to use AI in a judicial system that constantly is blamed for misjudgments in determining guilt and in sentencing decisions.

Already, algorithms are used in the judicial system for tasks such as risk assessment and “predictive policing,” in which AI processes crime data to identify trends that can help improve patrol decisions and police staffing.

Clearly, there are positive and negative aspects to these AI developments. Risk assessments, for example, can help eliminate prejudice in assigning bail. On the other hand, we have seen the disastrous consequences for innocent people sucked into legal nightmares when predictive AI mistakes perfectly benign activities (like a science teacher’s shopping trip) for criminal conduct because certain boxes are checked.

As with any computer-driven action, the output of algorithms and AI is only as good as what is input; and, just as important, who is doing the inputting and why.

What might a sentence look like from the perspective of an algorithm designed by the so-called law-and-order types, in which any infraction of a law, no matter the circumstances, warrants the full weight of the law in response? Or, what about the “zero-tolerance” gun control zealots who suspend children from school for making finger guns? Just look at the type of “justice” Democrats demand for President Trump, and imagine such a powerful tool as algorithmic sentencing guidelines crafted by them. 

Can “justice,” especially in the context of criminal law, which by its very nature balances individual liberty against government power, ever be reduced to a technological formula? Should it be thus degraded?

In a fundamental sense, determining whether all the elements of a crime exist in order to pursue prosecution, and then doling out punishment should the defendant be found guilty, are merely aspects of the judicial process; they are not justice in and of themselves. In its truest sense, “justice” is a principle that ensures -- to the greatest human degree possible -- that the right guilty party is brought before the courts, and that the resultant punishment is commensurate with and reflective of the individual situation at hand.

Justice reduced to algorithm is a two-dimensional reflection of a multi-dimensional condition.  No matter how sophisticated or expansive the data, AI cannot possibly factor in such relevant circumstances as motive, mens rea (that is, a guilty state of mind), or even the fairness of the law itself. 

Experience following the adoption of the federal Sentencing Guidelines in the late 1980s is highly relevant to any consideration of imposing AI on judicial proceedings. Those guidelines were the culmination of a multi-year effort to standardize and streamline sentencing in federal trial courts, but they have required numerous and extensive revisions ever since, and the U.S. Supreme Court ultimately deemed them unconstitutional as mandatory “guidelines.” This example alone should cause efforts to “standardize” fundamental aspects of our legal system via “artificial intelligence” to be viewed with extreme caution and, in my view, ultimately discarded.
