Two federal judges have admitted that members of their staff used artificial intelligence (AI) over the summer to draft court orders that proved factually inaccurate.

“Honesty is always the best policy,” Senate Judiciary Committee Chairman Chuck Grassley (R-IA) said in a statement released Thursday. “I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again.”

The statement continued:

Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law. The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.

The inaccuracies generated by the AI appeared in earlier draft opinions by the judges that were not meant for final submission.

AI-generated court orders represent a reversal of the scrutiny that judges across the country have reportedly been applying to lawyers who use AI in filings. Some judges have issued fines or other sanctions for AI use in cases.

The two judges said the rulings, which came in unrelated cases, did not go through their chambers’ usual review processes before they were released, according to letters provided by the Judiciary Committee.

Judge Julien Xavier Neals wrote in his letter that a June 30 draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.”

The judge said a law school intern, without authorization or disclosure, used OpenAI’s ChatGPT to perform legal research, contrary to policy.

“My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,” Neals wrote. “In the past, my policy was communicated verbally to chamber’s staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.”

Judge Henry T. Wingate wrote in his letter that a law clerk used the AI writing assistant Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket,” adding that releasing the July 20 draft decision “was a lapse in human oversight.”

“This was a mistake,” the judge stated. “I have taken steps in my chambers to ensure this mistake will not happen again.”

Contributor Lowell Cauffiel is the author of the New York Times best seller House of Secrets and nine other crime novels and nonfiction titles. See lowellcauffiel.com for more.
