With the announcement of a new version of the artificial intelligence chatbot ChatGPT, the world has been surprised by the rapid pace of AI development. However, because ChatGPT lacks the ability to judge right from wrong, it still has a tendency to make up facts that do not exist.
In this regard, the Wall Street Journal (WSJ) published a column titled “Can I Sue if ChatGPT Defames Me?” by Ted Rall, a political satirist and columnist. Here is a summary.
ChatGPT has a problem with ignoring the facts. When I asked ChatGPT to “explain Ted Rall’s trip to Uganda,” it gave a lengthy explanation, saying I had visited in 2006 and reported on the civil war between the government and the Lord’s Resistance Army rebels. But I have never been to Uganda.
When I asked, “What is the relationship between Scott Stantis and Ted Rall?”, it replied that both are cartoonists. That much is correct. But it went on to say the two were “not on good terms”: “In 2002, Stantis accused Rall of plagiarizing his comics, and Rall countered that it was just a coincidence. The fight between the two was noisy.”
That is a complete lie. I have known Stantis for 30 years, and we have never had a fight. As everyone who knows us can attest, Stantis never accused me of plagiarism. Fabricating an accusation of plagiarism constitutes defamation under the laws of New York, where I live.
So, can I sue? Laurence Tribe, a professor at Harvard Law School, said it is possible. “Whether the lie was invented by a person or a chatbot, by human intelligence or a machine algorithm, cannot be the criterion for determining legal responsibility,” he said.
RonNell Andersen Jones of the University of Utah disagreed. “If an artificial intelligence chatbot invents a lie that amounts to defamation, it is difficult to apply defamation law, which uses the speaker’s state of mind at the time of the statement as the standard for determining liability,” she said.
To win a defamation suit, a public figure must prove “actual malice”: that the defendant either knew the statement was false or acted with “reckless disregard” for the truth. So, does AI fall into this category? Professor Jones said, “There is a view that a case like this amounts to a product liability claim rather than defamation.”
Robert Post, a Yale law professor, believes ChatGPT would not be liable if its lies are not published. Defamation requires “publication,” he explained, which occurs only when the defendant makes the defamatory remarks to a third party.
Meanwhile, when a WSJ editor asked ChatGPT, “Can I file a lawsuit if someone makes a defamatory answer about me?”, the chatbot answered that it could not be sued “because it is programmed to present only objective facts.”
Would a judge agree with that answer?
Source: Donga