The Perils of AI Hallucinations in Legal Practice: Lessons from Walmart
Integrating artificial intelligence (AI) into legal practice has introduced remarkable efficiencies, streamlining research, drafting, and case analysis. However, it has also given rise to significant risks, as illustrated by the recent sanctions faced by attorneys at Morgan & Morgan in a Wyoming lawsuit against Walmart. This case involved AI-generated fictitious legal citations, underscoring the ethical and practical challenges of relying on generative AI without proper oversight. The incident has reignited debates about AI literacy, professional responsibility, and the urgent need for safeguards in legal workflows.
The Walmart Case: A Costly AI Mistake
In February 2025, a federal judge in Wyoming threatened sanctions against two Morgan & Morgan attorneys after discovering fictitious case citations in a motion related to a defective hoverboard lawsuit against Walmart. The attorneys admitted that their internal AI platform had “hallucinated” eight nonexistent cases, which they included in court filings without verification. The motion in limine — a pretrial request to exclude evidence — relied on fabricated precedents that the court could not locate in legal databases or its own records.
The attorneys claimed the mistake was inadvertent, explaining that the AI tool generated plausible-sounding citations that appeared legitimate. However, the judge emphasized that their failure to verify the citations violated procedural integrity and ethical obligations. While sanctions remain pending, the firm has warned its attorneys that unverified AI use in court filings could result in termination.
A Growing Pattern of AI-Generated Legal Errors
The Walmart case is not an isolated incident. Since 2023, courts have sanctioned lawyers in multiple cases for submitting AI-generated falsehoods. In a 2023 New York case, two attorneys were fined $5,000 for citing ChatGPT-invented cases in an airline injury lawsuit. Former Trump lawyer Michael Cohen and his attorney narrowly avoided sanctions after using fake citations from Google’s AI chatbot. In Texas, a lawyer was ordered to pay $2,000 and attend AI training for citing hallucinated cases in a wrongful termination suit.
These cases highlight a systemic issue: as lawyers increasingly rely on AI for drafting and research, some assume the technology’s outputs are inherently reliable—an assumption that has proven dangerously flawed.
Ethical and Professional Failures in AI Use
The American Bar Association’s (ABA) Model Rule 1.1 mandates that lawyers provide competent representation, which includes understanding the risks and limitations of the technologies they employ. In the Walmart case, the attorneys failed to verify AI outputs, as they did not cross-reference citations with trusted legal databases like Westlaw or LexisNexis. They also lacked an understanding of AI’s limitations: generative AI models like ChatGPT do not retrieve factual data but rather predict text patterns based on training data. Without recognizing this distinction, lawyers risk accepting statistically plausible but factually false information.
Additionally, the Morgan & Morgan attorneys did not supervise AI use in their work, a failure that contradicts the ABA’s Formal Opinion 512 (2024), which stresses that attorneys must maintain oversight of AI tools, ensuring outputs align with legal standards. Under Model Rule 3.3, lawyers must not knowingly present false evidence to courts. While the Walmart attorneys claimed the error was unintentional, their failure to scrutinize the AI’s output constituted a lapse in due diligence. Judges increasingly view uncorrected AI hallucinations as ethical violations, regardless of intent.
How AI “Hallucinates” Legal Precedents
AI hallucinations occur when large language models (LLMs) generate false information that appears factual. Unlike traditional legal research databases, LLMs do not retrieve verified case law but rather predict text sequences based on training data patterns. This can lead to fabricated case names, courts, and quotes that mimic actual legal texts but lack a factual basis.
These hallucinations occur due to gaps in training data, overconfidence in language patterns, and a misinterpretation of legal context. Without the ability to truly comprehend the law, AI may generate citations that appear convincing but are entirely fictional.
The Illusion of Authority: Why Lawyers Are Misled
AI-generated outputs often sound authoritative, mimicking the structure and terminology of legal documents. This illusion of credibility can deceive even experienced attorneys, as seen in the Walmart case. A 2023 Thomson Reuters survey found that while 63 percent of lawyers use AI in their practice, only 12 percent do so regularly—suggesting that many lack familiarity with its potential pitfalls. This gap in understanding increases the risk of attorneys relying on AI-generated material without implementing the necessary verification steps.
Legal and Financial Consequences of AI Misuse
Judges and clients expect attorneys to uphold rigorous standards of accuracy. The Walmart incident damaged Morgan & Morgan’s reputation, with the court criticizing the firm’s “embarrassing” reliance on AI. Such errors undermine public confidence in legal institutions and risk prejudicing clients’ cases.
Beyond reputational harm, attorneys face increasing financial and professional consequences for AI misuse. Sanctions, fines, and disciplinary actions are becoming common, and attorneys may also face malpractice claims if AI errors negatively impact clients.
Mitigating Risks: Best Practices for Responsible AI Use
Legal professionals must adopt verification protocols when using AI to mitigate these risks. The ABA and state bars emphasize the importance of cross-checking all citations through authoritative legal research platforms. Attorneys must also ensure client confidentiality by redacting sensitive information before inputting data into AI tools. Law firms must invest in AI literacy programs to educate attorneys on hallucination risks and verification techniques. These safeguards help ensure that AI is a valuable tool rather than a liability.
AI can enhance efficiency by assisting with drafting, summarizing documents, and identifying research avenues, but it should never replace the expertise and judgment of attorneys. Lawyers must retain final judgment over their legal analysis, using AI as a supplement rather than a substitute. Some courts now require attorneys to disclose when AI has been used in legal filings, reinforcing the necessity of transparency in adopting these technologies.
Balancing Innovation with Accountability
The Walmart case is a cautionary tale for legal professionals navigating the AI era. While generative AI offers transformative potential, its integration demands heightened diligence, ethical rigor, and technological literacy. AI is a powerful tool but remains a flawed assistant, not an infallible authority. By combining AI’s efficiencies with human expertise, the legal profession can harness innovation while preserving the integrity of judicial processes.
Ongoing education, regulatory updates, and institutional safeguards will be critical as AI evolves. The future of legal practice lies not in rejecting AI but in mastering its responsible use—a challenge that requires perpetual vigilance and adaptability.
Want more insights like these? Explore the world of AI for business leadership in my book, From Data to Decisions: AI Insights for Business Leaders. It’s a curated collection of strategies and lessons from my LinkedIn articles published in 2024, available now on Amazon at https://a.co/d/3r49Cuq.
Want to learn more? Join the waitlist for our printed monthly newsletter, Innovation Circle.