Judges are cracking down on attorneys who fail to properly fact-check the output of AI tools used in legal research and document preparation. Lawyers have been submitting legal documents that include completely made-up case citations “hallucinated” by AI tools like ChatGPT.

404Media reports that in recent weeks, judges have been penalizing lawyers who relied on AI to generate citations to court cases that do not actually exist. These “hallucination cites,” as they have been dubbed, have led to sanctions and fines for attorneys who failed to conduct due diligence in verifying the authenticity of the cases produced by AI.

The latest incident involves attorney Rafael Ramirez, who represented HoosierVac in an ongoing case against the Mid Central Operating Engineers Health and Welfare Fund. In October 2024, Ramirez filed a brief that cited a case the judge was unable to locate. Although Ramirez acknowledged the error, withdrew the citation, and apologized to the court, a further review of his filings revealed that he had included fictitious cases in two other briefs as well.

According to court documents filed by U.S. Magistrate Judge Mark Dinsmore of the Southern District of Indiana, Ramirez explained that he had previously used AI to assist with legal matters and was unaware that AI could generate fictitious cases and citations. Although the AI-generated excerpts appeared credible, Ramirez admitted that he did not conduct further research or attempt to verify the existence of the cited cases.

Judge Dinsmore emphasized that Ramirez’s failure to make a reasonable inquiry into the law was unacceptable, stating that even minimal effort would have revealed the non-existence of the AI-generated cases. As a result, the judge recommended sanctions of $15,000 against Ramirez.

This incident follows a similar case in January, in which attorneys filed court documents citing a series of non-existent cases as part of a lawsuit against a hoverboard manufacturer and Walmart. In February, U.S. District Judge Kelly Rankin demanded an explanation from the attorneys and considered sanctions. The attorneys admitted to using AI to generate the cases without catching the errors and called the incident a “cautionary tale” for the legal profession.

Last week, Judge Rankin issued sanctions against the attorneys involved in the hoverboard case. One attorney was removed from the case, while the other three attorneys were fined between $1,000 and $3,000 each.

These recent cases highlight the growing concern over the use of AI in the legal profession and the importance of human oversight and fact-checking. While AI can be a valuable tool for lawyers, it is crucial that attorneys understand its limitations and use it responsibly, ensuring that all cited cases and legal precedents are accurate and authentic.

As Judge Dinsmore noted, the use of AI must be consistent with counsel’s ethical and professional obligations, and it must be accompanied by the application of actual intelligence in its execution. The legal community must adapt to the increasing presence of AI while maintaining the highest standards of professionalism and accuracy in its work.

Read more at 404Media here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
