A Google computer engineer working on a chatbot capable of holding discussions on a wide range of topics was suspended by his employer after he claimed this artificial intelligence (AI) had become sentient.
This scenario, which has inspired many works of science fiction, including 2001: A Space Odyssey and its murderous HAL 9000 onboard computer, made the news following an interview the engineer, Blake Lemoine, gave to the Washington Post.
His initial assignment was to talk with the robot, named LaMDA, for "Language Model for Dialogue Applications", to test whether it engaged in discrimination or hate speech.
A conversational robot of this kind learns by imitation, ingesting the billions of words, texts and discussions available on the Internet, on Wikipedia or elsewhere online.
But the engineer was troubled by some of the robot's responses. When asked what kinds of things it feared, LaMDA said it had a deep fear of being turned off. "It would be exactly like death for me," the robot wrote, according to the Washington Post. "It would scare me a lot."
Dismissed by Google
Blake Lemoine laid out his thinking in a document sent to Google executives, in which he compiled his conversations with LaMDA.
They were not convinced, however. "Our team, including ethicists and technologists, reviewed the concerns Blake raised in light of our AI Principles and advised him that the evidence does not support his claims," a Google spokesperson told the Washington Post.
Google warns against the danger of anthropomorphizing [attributing human qualities to a thing] chatbots. These systems imitate the kinds of exchanges found in millions of sentences and can riff on any fanciful topic, the spokesperson argued.
Unhappy with his employer's denial, Blake Lemoine contacted the media and politicians about the matter, which led to his suspension for violating the company's confidentiality rules.
He has since published the full text of his conversation with LaMDA on a blog, in which he attacks his employer, whom he accuses of seeking to silence ethical criticism of its artificial intelligence technologies.
With information from the Washington Post
Source: Radio-Canada