An American lawyer with 30 years of experience also fell victim to ChatGPT fake information

An American veteran lawyer with 30 years of experience is facing sanctions for submitting court documents that cited precedents produced by the generative artificial intelligence (AI) chatbot ChatGPT. The precedents ChatGPT supplied turned out to be fabrications that do not actually exist. Amid growing side effects from AI-generated false information, the case is cited as a sign that even professionals must pay close attention to the ethical use of AI.

According to CNN on the 28th, Kevin Castel, a federal district court judge in Manhattan, New York, announced that he would hold a hearing on the 8th of next month to consider sanctions against Steven Schwartz, a lawyer with 30 years of experience who submitted documents containing numerous false precedents to the court. "The filing is replete with bogus judicial decisions and bogus quotations," Judge Castel said, calling the circumstance "unprecedented."

Attorney Schwartz represented Roberto Mata, who in August 2019 flew on Colombia's Avianca Airlines from El Salvador in Central America to New York. Mata recently sued the airline, claiming he injured his knee when he was struck by a metal food-service cart on board.

The airline argued that the two-year statute of limitations for ordinary aviation cases had expired, but attorney Schwartz countered that the statute of limitations did not apply and submitted a 10-page brief citing precedents from similar cases involving other airlines, including Korean Air and China Southern Airlines. But at least six of the precedents he cited turned out to be false.

As the controversy spread, lawyer Schwartz belatedly admitted on the 25th that he had "consulted ChatGPT to 'supplement'" his legal research. He said he repeatedly asked ChatGPT whether the precedents were genuine, and that ChatGPT answered "yes" each time, insisting there was no doubt about their authenticity.

Source: Donga
