May 27, 2023
Artificial Intelligence (emphasis on “Artificial”)
One of the hot topics in legal circles—as it seems to be in a whole lot of circles—is artificial intelligence and, in particular, ChatGPT. I can tell you that I’m, at best, a caveman when it comes to this stuff. I’m proficient enough at using computers, but I just haven’t bothered much with learning how to write briefs and motions and things using artificial intelligence, which I’m assured is the future of the profession. The truth is that I like writing, and it’s what I get paid to do. I feel like I’d be cheating if I turned to some artificial source. But there are other good reasons why lawyers should avoid ChatGPT, or at least strongly question its output.
A series of orders coming out of one case currently pending in the federal court for the Southern District of New York provides ample reason for caution. The case is Roberto Mata v. Avianca, Inc., No. 22-cv-1461, a personal injury suit in which an airline employee allegedly injured the claimant. For purposes of this post, the facts don’t really matter. Avianca, Inc., the airline in question, filed a motion to dismiss the case under Rule 12(b)(6) of the Federal Rules of Civil Procedure. Rule 12(b)(6) gives a defendant a relatively quick and easy way to get a case against it dismissed if the claimant fails to state a cause of action and facts that, if believed, would back it up; that is, if there’s just no “there” there, the court will dismiss the case.
Mata’s lawyers—Peter LoDuca and Steven Schwartz of the firm of Levidow, Levidow & Oberman (at least, for now)—filed a Response in Opposition to Avianca’s Motion and backed it up with citations to a slew of caselaw precedents from various courts. There was just one problem with those citations. In all but a couple of instances, the cases they cited didn’t exist.
You see, apparently the original lawsuit was filed in New York state court by Schwartz. Avianca removed the case to federal court, a fairly typical and not-at-all-unprecedented move by a defendant, particularly an airline. Because Schwartz wasn’t admitted to the local federal court, he brought in LoDuca—who was admitted—to sign the papers while Schwartz continued to do the legal work, like responding to Avianca’s motion. But instead of researching the law and writing a response the old-fashioned way, Schwartz turned to ChatGPT, which not only drafted a response for him but provided citations purporting to show that the law was just what his client Mata needed it to be. And that’s because ChatGPT made them up. But not just the citations, mind you. ChatGPT invented the names of the cases, their locations in caselaw reporters, and the complete text of supposed appellate opinions. They look like appellate opinions. They read, mostly, like appellate opinions. But they’re complete fabrications, and neither Schwartz nor LoDuca bothered to check whether they were for real.
In affidavits to the court, both Schwartz and LoDuca fell on their swords (like there was anything else to do?). Schwartz, in the understatement of the year, admitted to being a novice ChatGPT user, and LoDuca admitted that he was basically acting as a sock puppet for Schwartz in a court where Schwartz wasn’t admitted. Judge P. Kevin Castel’s reaction once all this came out was, how to put this . . . Outraged? Incensed? Incandescent? (Maybe I should ask ChatGPT.) Suffice to say, Judge Castel was pretty mad, and with good reason. You don’t misrepresent the law to a federal court (or any court, for that matter), you don’t fail to check the veracity of your sources before representing them as the law, and you don’t let someone else use your admission to a court where they aren’t admitted. The icing on the cake was that both Schwartz and LoDuca used “false and fraudulent notarization[s]” on their affidavits, further incurring Judge Castel’s wrath, quite likely committing an actual knowing fraud on the court, and compounding their problems immensely.
So Schwartz and LoDuca’s case definitely gives a primer on how not to use ChatGPT, and it should also serve as a big screaming “caution” sign for any lawyer considering its use. I don’t know enough about artificial intelligence to say how much or how little reliance lawyers should place on aids like ChatGPT, but my take for now is “exceedingly little.” Computers probably aren’t highly schooled in ethics, but even a first-year law student understands that you don’t fabricate cases or lie to the court. Too bad there are apparently actual lawyers out there who don’t get that.