More than 60 percent of judges surveyed have incorporated AI tools into their judicial work, according to a new study that has raised concerns among legal experts about potential reliability issues.
The Washington Post reports that a growing number of judges across the United States are adopting AI technology to assist with various aspects of their judicial duties, from preparing for hearings to drafting legal rulings. The trend represents a significant shift in how the judiciary approaches case management and decision-making processes.
According to a recent study by Northwestern University, over 60 percent of surveyed judges reported using AI tools in their professional work. This widespread adoption comes despite ongoing concerns from some legal experts who worry that the technology’s known reliability issues could undermine judicial authority and the integrity of court proceedings.
Federal Judge Xavier Rodriguez, who serves in Texas, exemplifies this emerging trend in judicial AI usage. Rodriguez has made AI tools a regular part of his hearing preparation routine. His typical workflow involves inputting relevant court filings into an AI system, which then generates a timeline of the case and summarizes the claims being made by the various parties involved. This automated summary allows him to quickly review the key elements of a case before proceeding to a hearing.
The integration of AI into judicial work has not been without controversy. Legal experts and scholars have raised significant concerns about the potential risks of relying on artificial intelligence for judicial functions. One primary concern centers on the well-documented unreliability of AI systems, including instances where they have generated false or misleading information, a failure mode commonly known as hallucination.
Critics worry that if judges rely on AI-generated summaries or analyses that contain errors or omissions, it could lead to flawed legal reasoning or unjust outcomes. The potential for AI to misinterpret legal nuances or fail to capture important contextual details presents a particular challenge in the legal field, where precision and accuracy are paramount.
Breitbart News previously reported that after lawyers at one major law firm were caught citing nonexistent cases that had been hallucinated by AI, the firm’s leadership called the dangers of using AI “nauseatingly frightening:”
In response to the incident, Ayala was immediately removed from the case and replaced by his supervisor, T. Michael Morgan, Esq. Morgan expressed “great embarrassment” over the fake citations and agreed to pay all fees and expenses related to Walmart’s reply to the erroneous court filing. He emphasized that this incident should serve as a “cautionary tale” for both his firm and the legal community as a whole.
Morgan added, “The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought.” He later admitted that AI can be “dangerous when used carelessly.”
Another concern relates to the potential impact on judicial authority and public confidence in the court system. Some experts fear that widespread use of AI in judicial decision-making could erode the perceived legitimacy of court rulings if the public believes that important legal determinations are being made by algorithms rather than human judges applying their expertise and judgment.
The study’s findings also raise questions about transparency and accountability in the judicial process. When AI tools are used to prepare rulings or analyze cases, it may not always be clear to litigants or the public what role the technology played in shaping the final decision. This lack of transparency could complicate efforts to appeal decisions or identify potential errors in legal reasoning.
Breitbart News social media director and author Wynton Hall argues in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that conservatives must develop a plan to deal with the bias baked into AI by leftists in Silicon Valley. That bias could influence how judges rule on cases if they rely on AI to guide their verdicts, and could shape our lives in other ways, such as how our children and grandchildren are taught in the classroom.
Code Red covers a wide range of topics related to AI and bias, including:
- Why AI is wired for woke indoctrination—and how to resist it.
- How elites plan to weaponize AI job losses to push dependency.
- How America can beat China without becoming China.
- How to prepare your kids for the blinding speed of AI disruption.
- The new national security threats AI unleashes—and how we defend against them.
- Why “AI girlfriends” are luring millions—and what it will take to preserve authentic human connection.
- How AI will test faith and meaning—and why spiritual renewal may be its most surprising outcome.
Read more at the Washington Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.