
When prosecutors demanded that a California man facing illegal gun possession charges be held without bail, his defense team argued that the charges simply did not warrant such a harsh response. Prosecutors had pages and pages of explanations to support their argument. Unfortunately for them, their extensive documentation was riddled with errors.

AI Issues

As it happens, the prosecutor’s office had been using AI to beef up paperwork in several cases, and in each situation there were serious misinterpretations of law, as well as quotations that did not exist in the cited texts. Ultimately, there were clear indications that AI was the culprit behind the mistakes. That led defense attorneys to take the case to the California Supreme Court, in hopes that the court would find a pattern of erroneous legal interpretations and case citations. The case produced some interesting revelations.

Problems

Defense attorneys had 22 technology researchers and legal scholars alongside them in court. These professionals warned that the unchecked use of artificial intelligence in the legal field could lead to wrongful convictions and sentences. Legal documents have been notably peppered with errors stemming from tools such as Gemini and ChatGPT, which are commonly used to prepare everything from essays and emails to legal briefs. When the use of AI goes unimpeded, the pitfalls can be disastrous, since these tools are known to fabricate answers to legal questions.

Arizona State University law professor Gary Marchant conceded that AI-generated inaccuracies in court papers are more likely an indication of negligence than deliberate deception. Nonetheless, because sycophancy is a known characteristic of AI, these tools often stretch the truth in an effort to produce an answer that supports a specific argument. Roughly 600 cases of such hallucinated content have been detected worldwide, more than 60% of which occurred in U.S. courts. That leads to some gripping questions:

  • Since studies indicate that as many as 82% of legal queries to chatbots result in hallucinations, a risk that prompted a strong caution from Supreme Court Chief Justice Roberts in 2023, can court documents created with AI be trusted?
  • With 75% of lawyers planning to use AI in their work, how will legal outcomes be affected?
  • Should there be restrictions on the use of AI in legal work, since even AI tools that claim to reduce hallucination issues produce errors in 17% to 34% of uses?

