September 30, 2023


Lawyers blame ChatGPT for deceiving them into citing fake case law

NEW YORK (AP) — Two apologetic attorneys responded to an angry Manhattan federal court judge Thursday, blaming ChatGPT for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing potential punishment for filing a lawsuit against an airline that included references to previous court cases that Schwartz thought were real but were actually invented by the AI-powered chatbot.

Schwartz said he used the groundbreaking software to search for legal precedents supporting a client’s case against Colombian airline Avianca over a 2019 in-flight injury.

The chatbot, which has dazzled the world by producing essay-like answers to prompts from users, suggested several cases involving airline accidents that Schwartz couldn’t find through the usual methods used at his law firm.

The problem was that many of these cases were not real or involved non-existent airlines.

Schwartz told U.S. District Judge P. Kevin Castel that he “was operating under the misconception… that this website was obtaining these cases from some source that I didn’t have access to.”

He said he “failed miserably” in conducting follow-up research to ensure the citations were correct.

“I didn’t understand that ChatGPT could make cases up,” Schwartz said.

Microsoft has invested billions of dollars in OpenAI, the company behind ChatGPT.

Its success, which has shown how artificial intelligence can change the way humans work and learn, has generated concerns for some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel appeared baffled and upset by the unusual event and disappointed that the attorneys did not act quickly to correct the false legal citations when they were first alerted to the problem by Avianca’s attorneys and the court. Avianca’s lawyers had pointed out the spurious case law in a March filing.

The judge confronted Schwartz with a single legal case the chatbot had invented. It was initially described as a wrongful death case brought by a woman against an airline, only to morph into a legal action about a man who missed a flight to New York and had to incur additional expenses.

“Can we agree that this is legal nonsense?” Castel asked.

Schwartz said he mistakenly believed the confusing presentation was caused by excerpts taken from different parts of the case.

When Castel finished questioning him, he asked Schwartz if he had anything else to say.

“I would like to sincerely apologize,” Schwartz said.

He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and deeply remorseful.”

He said that he and the firm he worked for — Levidow, Levidow & Oberman — had put safeguards in place to ensure nothing similar would happen again.

LoDuca, the other attorney who worked on the case, said he trusted Schwartz and had not adequately reviewed the research he had compiled.

After the judge read parts of one of the cited cases aloud to show how easy it was to discern that it was “gibberish,” LoDuca said, “It never occurred to me that this was a bogus case.”

He said the result “hurts me to no end.”

Ronald Minkoff, an attorney for the law firm, told the judge the submission “resulted from carelessness, not bad faith” and should not result in sanctions.

Lawyers have historically had a hard time with technology, especially new technology, he said, “and it hasn’t gotten any easier.”

“Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live fire.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he presented the Avianca case during a conference last week that drew dozens of participants, in person and online, from state and federal courts in the United States, including the Manhattan federal court.

He said the topic caused shock and bewilderment at the conference.

“We’re talking about the Southern District of New York, the federal district that handles the big cases, 9/11 to all the big financial crimes,” Shin said. “This was the first documented case of potential professional misconduct by a lawyer using generative AI.”

He said the case demonstrated how lawyers may not understand how ChatGPT works: it tends to hallucinate, describing fictional things in a way that sounds realistic but is not.

“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.

The judge said he would rule on sanctions at a later date.
