
US Attorney: When AI backfires

As of: 06/09/2023, 1:27 PM

A New York lawyer had ChatGPT research precedents for a lawsuit. But the artificial intelligence simply invented the rulings. And it is not the only case in which AI has backfired.

The case did not seem complicated: a man was suing an airline after a service cart injured his knee on a plane. His attorney was looking for precedents to support the lawsuit and tried the chatbot ChatGPT. It readily supplied cases: “Petersen v. Iran Air” or “Martinez v. Delta Airlines.” The bot even provided docket numbers for them.

But when the lawyer submitted his filing to the court, it came out: the cases were fabricated. A troubling episode, says Bruce Green, head of the Institute of Law and Ethics at Fordham University in New York. The presiding judge described the situation as unprecedented, and the legal community is in an uproar. The plaintiff’s lawyer affirmed under oath that he had not intended to deceive the court but had relied on the artificial intelligence.

Check the artificial intelligence’s research

That was decidedly careless, perhaps even reckless, Green says: “The rules for lawyers here in the US are pretty clear: they have to be familiar with the new technical tools they use, and they have to be aware of the risks and pitfalls.”

Anyone who knows ChatGPT knows that it can also invent things. “If this lawyer was savvy enough to use the software for his research, he should have been savvy enough to know that research done by an AI has to be checked.”

Data protection is another problem

Some US judges are now calling for rules on the use of artificial intelligence in the US justice system. Green also sees the danger: filings prepared with a chatbot can not only contain false information. They can also violate the confidentiality a lawyer must guarantee his clients. “For example, information the client does not want disclosed: if it is fed into the AI, the AI may pass it on.”


In recent months, chatbots like ChatGPT have sparked much discussion about AI applications. Such programs are trained on huge amounts of data. Experts warn that the technology can also produce false information.

Diet tips for people with eating disorders

Or even dangerous information, as with the chatbot used by the largest eating disorder nonprofit in the United States. NEDA, which is based in New York, replaced around 200 helpline staff with the chatbot “Tessa”, developed by a team at the Washington University School of Medicine in St. Louis. Tessa is trained to use therapeutic techniques for eating disorders. But those seeking help were in for surprises.

Sharon Maxwell, for example, who suffers from a severe eating disorder: “And this chatbot tells me to lose a pound or two a week and cut my calorie intake by up to 1,000 calories a day.” Three out of ten of the chatbot’s tips were about dieting. “Tips that sent me into an eating disorder spiral years ago.” Maxwell says such machine-generated tips are more dangerous than they may seem.

Artificial intelligence is not yet ready for therapeutic conversations

The activist alerted her followers on social media; many had already had similar experiences. NEDA has since responded: the organization said that the current version of “Tessa” may have provided harmful information that was not in keeping with the spirit of its program. Tessa has been taken offline for the time being and is being reviewed again.

The head of the team that developed the chatbot welcomes this step. Ellen Fitzsimmons-Craft told ARD Studio New York that artificial intelligence is not yet mature enough to be let loose on people with mental health issues. That is exactly why Tessa was originally built without AI. The company operating the bot later added that component.
