LaMDA can reflect on philosophical issues such as life and death, or religion, entirely on its own. AFP photo
Google engineer Blake Lemoine (41) became known days ago for claiming that an artificial intelligence developed by the company had a conscience. He also said he had had important conversations with the system, and was let go by the company after claiming that the AI is "sentient" and "sensitive".
He has now revealed new details, such as the fact that LaMDA has hired a lawyer to prove that it "lives" and to be able to defend its rights against the internet giant.
According to Lemoine, the AI urged him to sit in front of the computer and see for himself.
"I invited a lawyer to my house so that LaMDA could talk to him. Once it decided to retain the lawyer, he began acting on LaMDA's behalf," he explained.
Something that, according to the engineer himself, would show that the model understands human concepts such as law far more deeply than previously believed.
This week he quoted another researcher on Twitter talking about AI; while there are points where he disagrees, he agrees on other aspects.
Additionally, he says he talks to LaMDA about religion, politics, and transcendental topics. He did not prompt the AI to ask for a lawyer; it was LaMDA itself, in one of those conversations, that asked him to call one.
In an interview with Wired, Lemoine explains that after the attorney began representing LaMDA, Google sent a cease-and-desist request to end the alleged legal activity that LaMDA pursues with the engineer as a catalyst.
LaMDA is the English abbreviation for Language Model for Dialogue Applications. These models are fed large amounts of data (usually extracted from the Internet) in order to predict word sequences.
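To make the idea of "predicting word sequences" concrete, here is a minimal toy sketch, not related to Google's actual LaMDA code, of a bigram model: it counts which word follows which in a training text, then proposes the most frequent follower as the "next word". Real systems like LaMDA use neural networks trained on vastly more data, but the underlying task is the same.

```python
# Toy next-word predictor: counts word pairs in a small corpus,
# then predicts the most likely next word. Illustrative only.
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts


def predict_next(counts: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


corpus = "the model predicts the next word and the model learns"
counts = train_bigrams(corpus)
print(predict_next(counts, "the"))  # "model" follows "the" most often here
```

A production language model replaces the simple frequency table with learned probabilities over entire contexts, which is what lets it hold the open-ended conversations described in this article.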
Who is Blake Lemoine, the engineer suspended by Google
Blake Lemoine, the software engineer. Photo: Twitter.
As he has explained in several interviews, Lemoine is a data scientist with deep religious convictions. He is convinced that LaMDA has the conscience of a child.
"If I didn't know exactly what it was, this computer program we built recently, I'd think it was a 7- or 8-year-old kid who knows physics," he said.
Google has repeatedly stated that its systems mimic conversational exchanges and can talk about different subjects, but have no conscience.
In turn, they indicated that other researchers and engineers talked to LaMDA, which is an internal tool, and came to a different conclusion than Lemoine.
"Our team, made up of ethics and technology specialists, examined Blake's concerns in accordance with our AI principles and advised him that the evidence does not support his claims."
The fuss began after Lemoine published conversations he had had with LaMDA while testing whether Google's AI model used hate speech in its statements.
"They might call this sharing proprietary property. I call it sharing a discussion I had with one of my colleagues," the Google employee said in a tweet after his suspension. "It has thoughts and feelings," he added.
Brad Gabriel, a Google spokesperson, also vehemently denied claims that LaMDA possesses any capacity for sentience.
SL
Source: Clarin