No Problem with Expert’s Using ChatGPT to Confirm His Work

From yesterday’s decision by Judge Gary Brown (E.D.N.Y.) in Ferlito v. Harbor Freight Tools USA, Inc.:

Plaintiff purchased a splitting maul (an axe specially designed for splitting wood) from defendant in 2017. Several months later, while plaintiff was hanging the maul to store it, the head of the tool detached and struck plaintiff, causing injuries to his nose and left eye. Plaintiff initiated this lawsuit in 2020, alleging that the head detached due to a design defect; defendant asserts the product failed due to plaintiff’s misuse, which it contends is evidenced by a large crack in the handle….

To support his defective design claim, plaintiff seeks to offer expert testimony by Mark Lehnert, who identifies himself as a “consultant with products and liability history, extensive knowledge and experience in manufacturing and assembly, [and] mechanical and electrical engineering management.” Lehnert holds no engineering degrees, yet reports extensive experience designing and manufacturing power tools, holds over a dozen patents, and has worked in management positions in engineering departments at several corporations over a period of decades.

Lehnert contends the maul used by plaintiff was defectively designed because the handle and head were weakly bound with adhesive, leading to the accident. He opines that good design requires securely attaching the head and handle by “drilling a small diameter hole through the side of the maul, into and through the handle” and placing an aluminum pin “through the head” to reduce the possibility of separation. Lehnert’s report references several other mauls currently available for purchase that incorporate such a pin….

Defendant moves to preclude Lehnert’s testimony, arguing that he is unqualified as an expert because he lacks engineering degrees, and his experience is limited to designing power tools rather than manual tools. Defendant further argues that Lehnert’s opinion is unreliable because (i) he did not rely on any scientific, technical, or trade articles in preparing his report, and (ii) after completing the report, he entered a query into ChatGPT about the best way to secure a hammer head to a handle, which produced a response consistent with his expert opinion….

No problem, says the court:

Federal courts have grappled with the appropriateness of an expert’s use of artificial intelligence to form opinions, and the validity of AI as a research tool in litigation more broadly. See Kohls v. Ellison (D. Minn. 2025) (excluding expert testimony when the expert’s affidavit contained ChatGPT-generated references to non-existent academic articles); Mata v. Avianca, Inc. (S.D.N.Y. 2023) (sanctioning lawyers and law firm pursuant to Rule 11 for using ChatGPT to find non-existent cases, which the attorneys cited in a filing); Park v. Kim (2d Cir. 2024) (referring attorney to the Court’s Grievance Panel for relying on ChatGPT to write a brief containing non-existent cases).

In Kohls, the expert’s “citation to fake, AI-generated sources in his declaration … shatter[ed] his credibility with th[e] Court” such that his testimony would not be reliable as required by [the federal rules related to admissibility of expert evidence]. However, the Court emphasized that experts can use “AI for research purposes” given its “potential to revolutionize legal practice for the better.” [Admissibility] issues arise only “when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made AI-generated answers.”

Here, there is little risk that Lehnert’s use of ChatGPT impaired his judgment regarding proper methods for securing the maul’s head to its handle. The record from the hearing reflects that Lehnert used ChatGPT after he had written his report to confirm his findings, which were based on his decades of experience joining dissimilar materials. During the hearing, Lehnert professed to being “quite amazed” that the “ChatGPT search confirmed what [he] had already opined.” …

There is no indication that Lehnert used ChatGPT to generate a report with false authority or that his use of AI would render his testimony less reliable. Accordingly, the Court finds no issue with Lehnert’s use of ChatGPT in this instance….
