Not Again! Two More Cases, Just this Week, of Hallucinated Citations in Court Filings Leading to Sanctions

Key Points:

  • Generative AI is reshaping the legal profession, in part by exposing lazy and incompetent lawyers.
  • Lawyers who relied on fictitious cases generated by ChatGPT have faced consequences including case dismissals and sanctions.
  • Lawyers must understand and verify AI-generated content before submitting it to avoid ethical and professional pitfalls.

Summary:

The utilization of generative artificial intelligence (AI) in legal research has come under scrutiny following multiple cases where lawyers unwittingly cited fictitious cases generated by AI systems, such as ChatGPT. Despite several high-profile incidents exposing these inaccuracies, recent occurrences in Missouri and Massachusetts indicate a persistent trend.


In the Missouri case of Kruse v. Karlen, an unrepresented litigant, Jonathan Karlen, filed an appellate brief in which 22 of 24 case citations were fictitious. The Missouri Court of Appeals, in an opinion by Judge Kurt S. Odenwald, highlighted the egregious nature of Karlen’s submission, which included made-up generic case names such as ‘Smith v. ABC Corporation.’ Although Karlen claimed he had received the citations from an online consultant posing as a licensed attorney, the court deemed his actions an abuse of the judicial system. It dismissed Karlen’s appeal and imposed a $10,000 sanction for his frivolous conduct.


Similarly, in the Massachusetts case of Smith v. Farwell, plaintiff’s counsel filed multiple memoranda containing fictitious case citations, attributing the errors to an “unidentified AI system” used by individuals in his office, including recent law school graduates and an associate. Judge Brian A. Davis raised concerns about the accuracy of the citations, prompting a subsequent hearing. Although the attorney acknowledged his oversight and his failure to verify the AI-generated content, the court imposed a $2,000 sanction for violating standards of conduct and his duty under Rule 11.


Judge Davis, emphasizing the broader implications of these incidents, stressed the necessity for attorneys to exercise caution and verify the authenticity of AI-generated content before submission. Failure to uphold these standards could result in further sanctions and undermine the credibility of legal practitioners. As the legal profession grapples with the evolving role of AI in legal research, this serves as a cautionary tale highlighting the importance of maintaining ethical and professional standards in the digital age.


The repercussions faced by Karlen and the plaintiff’s counsel underscore the need for legal practitioners to adopt new technology responsibly. Moving forward, the legal community must prioritize rigorous oversight and verification to protect the integrity of the legal system and avoid the pitfalls of unchecked reliance on AI-generated content. By learning from these incidents, attorneys can navigate the evolving landscape of legal technology with vigilance and diligence, preserving the core principles and ethics of legal practice.


©2024 The Horizon