Lawyer Cites Fake Cases Generated by ChatGPT in Legal Brief: A Cautionary Tale


In an age where artificial intelligence (AI) is becoming increasingly prevalent across industries, it was only a matter of time before the legal field began to feel the effects of AI-powered tools. But as a recent case involving a New York lawyer demonstrates, relying too heavily on AI without proper verification can lead to disastrous consequences. This article delves into the story of a lawyer who used OpenAI's ChatGPT to supplement his legal research, only to discover that the chatbot had generated fake cases, which he then cited in a legal brief.

The ChatGPT Debacle: A Brief Overview

Steven A. Schwartz, a lawyer from the firm Levidow, Levidow & Oberman, found himself in hot water after using OpenAI’s ChatGPT to supplement his legal research in a personal injury lawsuit filed by Roberto Mata against Colombian airline Avianca. In preparing a response to Avianca’s motion to dismiss, Schwartz consulted ChatGPT, which provided him with legal sources, opinions, and citations. Unbeknownst to Schwartz, six of the cases ChatGPT provided were bogus. Judge P. Kevin Castel called this “an unprecedented circumstance” and scheduled a hearing to determine possible sanctions.

The Unreliable AI: ChatGPT’s Role in the Fiasco

Schwartz claimed in an affidavit that he had never used ChatGPT for legal research before and was unaware that the AI could produce false information. When he asked the chatbot whether the cases it provided were real, ChatGPT assured him of their authenticity, stating that they could be found in reputable legal databases such as LexisNexis and Westlaw. Only later did Schwartz realize that ChatGPT's assurances were worthless, and he admitted that he should have verified the cases' authenticity independently.

The Fallout: A Front-Page Story and a Scheduled Hearing

Schwartz’s blunder caught the attention of The New York Times, which published a front-page story on the incident. With a hearing scheduled to determine possible sanctions, the legal community is left wondering how such a mistake could have been made—and what it means for the future of AI in the legal profession.

Verify, Verify, Verify: The Importance of Independent Verification

One critical error Schwartz made was failing to independently verify the cases ChatGPT provided during his research. Kay Pang, an experienced in-house counsel at VMware, emphasized the importance of verification, stating on LinkedIn that the lawyer “didn’t follow the most important rule—Verify, verify, verify!” Nicola Shaver, the CEO and co-founder of Legaltech Hub, echoed this sentiment, stressing the need for independent verification rather than simply relying on ChatGPT’s assurances.

Ashley Pantuliano, OpenAI Associate General Counsel, had previously mentioned in a Lawtrades webinar that ChatGPT could be a helpful starting point for legal research, but warned that it could sometimes produce inaccurate information. According to Pantuliano, lawyers using the tool should be aware of its limitations and always verify the information provided.

A Lesson Learned: Schwartz’s Regret and Future Approach

In his affidavit, Schwartz expressed great regret for using artificial intelligence to supplement his legal research and vowed never to use AI without absolute verification of its authenticity in the future. He acknowledged that his reliance on ChatGPT’s assurances was a mistake and that he should have been more cautious in trusting the AI-generated cases.

Artificial Intelligence in Legal Work: Don’t Discount the Benefits

Despite Schwartz’s unfortunate experience, legal professionals should not dismiss the potential benefits of AI in the legal field. Alex Su, the head of community development at Ironclad, warned against developing a mindset that equates all AI tools with ChatGPT’s shortcomings. He argued that there are many AI-powered legal tech tools from companies with a track record of success in the legal industry.

While acknowledging that some generative AI products may not be 100% reliable, Su suggested that vendors would be incentivized to warn users about accuracy rates and provide more reliable solutions specifically tailored for legal use cases. He encouraged lawyers to approach AI with a “learning mindset” and remember that it can be most effective as a “first pass or first draft tool.”

Nicola Shaver also emphasized the importance of not “throwing the baby out with the bathwater” when it comes to AI in the legal profession. She urged attorneys to educate themselves about AI-powered legal research tools and be prepared for a future where AI will play a critical role in the industry.

The Bogus Cases: A Closer Look at the Fake Decisions

The six fabricated cases cited by Schwartz in his legal brief are as follows:

  1. Varghese v. China Southern Airlines
  2. Shaboon v. Egyptair
  3. Petersen v. Iran Air
  4. Martinez v. Delta Airlines
  5. Estate of Durden v. KLM Royal Dutch Airlines
  6. Miller v. United Airlines

These cases not only bore made-up names but also came with "excerpts" from the bogus decisions, complete with nonexistent quotes and internal citations. Judge Castel pointed out the glaring deficiencies in these supposed opinions, emphasizing the unprecedented nature of the situation.

The Hearing: Possible Sanctions and Consequences

Judge Castel has scheduled a hearing for Schwartz, fellow attorney Peter LoDuca, and their law firm to show cause for why they should not be sanctioned. With the legal community watching closely, the outcome of this hearing may set a precedent for future cases where AI-generated information is used in legal proceedings. The ramifications of this case will likely resonate throughout the legal industry and serve as a stark reminder of the importance of verifying AI-generated information.

The Road Ahead: Navigating the Intersection of AI and Law

As AI continues to evolve and find its way into various industries, including the legal field, professionals must strike a balance between embracing the technology’s potential benefits and maintaining a healthy skepticism regarding its reliability. The case of Steven Schwartz and the ChatGPT-generated fake cases serves as a cautionary tale, illustrating the importance of independent verification and a measured approach to incorporating AI into legal practice.

Opportunities for Growth: Improving AI’s Reliability in Legal Work

The ChatGPT incident highlights the need for AI developers and legal tech companies to work together to improve the reliability of AI-powered tools in legal research and practice. By refining their algorithms, implementing safeguards against the generation of false information, and providing transparent information about accuracy rates, companies can help ensure that AI becomes a valuable resource for legal professionals.

Education and Adaptation: Preparing Lawyers for an AI-Integrated Future

As AI becomes increasingly integrated into the legal industry, it is crucial for lawyers to educate themselves about the technology and its potential applications. By staying informed about the latest AI tools and their capabilities, attorneys can better assess the risks and rewards of incorporating AI into their practice and make informed decisions about how to use AI responsibly and effectively.

In Conclusion: A Lesson in AI and Legal Practice

The story of Steven Schwartz and the ChatGPT-generated fake cases serves as a cautionary tale for lawyers and legal professionals. While AI has the potential to revolutionize the legal field, its integration into legal practice must be approached with caution and a commitment to independent verification. By learning from this incident and adapting to the evolving landscape of AI in the legal industry, attorneys can harness the power of AI to improve their practice while avoiding the pitfalls that come with overreliance on unverified AI-generated information.
