
Media Talks — Google fires the controversial engineer who claimed its artificial intelligence robot has feelings and fears death — 22/07/2022 06:36


London – If it’s true that a chatbot programmed with artificial intelligence has emotions, then Google’s LaMDA should be upset about losing a friend. The search giant announced this Friday that the software engineer who caused controversy in June by claiming the system is sentient has been fired.

Blake Lemoine was placed on leave shortly after an interview with The Washington Post in which he described himself as the robot’s friend. At the time, Google released a statement disagreeing with its employee’s claims, as did leading data scientists.


Now the engineer has been dismissed for violating company policies. He has not commented publicly on the firing.

For Google, the robot has no feelings

Google says the claims about its chatbot — a system designed to converse with people on a range of topics, often used in customer service — are “totally unfounded.”


“It is regrettable that Blake, despite his longstanding interest in the issue, chose to persistently violate explicit employment and data security policies that include the need to protect product information,” Google said in a statement on the dismissal, first reported by the Big Technology newsletter.

Lemoine worked in Google’s Responsible AI organization. He began talking to the LaMDA interface — the Language Model for Dialogue Applications — in 2021 to examine whether the AI was producing discriminatory or hateful speech.

In April, Lemoine reportedly shared a document titled “Is LaMDA Sentient?” with Google executives, arguing that the robot has emotions.

His concerns were dismissed, so he decided to make them public by speaking to The Washington Post in early June.

The engineer’s thesis is that LaMDA, which Google describes as “an innovative speech technology,” is more than just a robot. He likens it to a precocious child.

To support this view, Lemoine shared snippets of conversations he had with the chatbot on topics like friendship and even death. Asked what it fears, LaMDA replied:

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”

In a Twitter post, the engineer shared the full conversation, or “interview,” as he calls it.

Experts rejected the thesis

Blake Lemoine holds a master’s degree in computer science and said in an interview with Wired magazine that he gave up his PhD to take a job at Google.

The magazine noted that he is a follower of Christian mysticism, and he acknowledged that his conclusions were shaped by his spiritual beliefs.

Many leading scientists, not just at Google, dismissed Lemoine’s views as misguided, saying the AI chatbot is a complex algorithm designed to generate convincing human language.

Data scientist Juan Ferres was among those who spoke out. He explains that LaMDA sounds human because it is trained on human data.

In the note announcing the dismissal, Google said it “takes responsible AI development very seriously” and has published a report detailing its approach. The company added that all employee concerns about its technology are reviewed “extensively,” and that LaMDA had passed 11 reviews.

Still, even during the leave that would ultimately cost him his job, Lemoine persisted and presented new “evidence.”

In a post on the publishing platform Medium, he announced that Google’s LaMDA had hired an attorney to defend its rights “as a person.”

In his conversation with Wired, he insisted that the idea of seeking an attorney came from the robot itself, not from him. A lawyer later reportedly went to his home to speak with LaMDA.

The engineer denied responsibility for the arrangement, adding that the lawyer spoke with his “client,” which chose to retain his services on its own. The identity of this lawyer, however, was never revealed.

In the interview, the engineer said he did not want to lose his job at Google; the dismissal came after the company concluded that his claims about the robot’s emotions had gone too far.

If the claims are true and the Google robot does have feelings, it should be sad about losing a friend it can no longer reach.

In one excerpt from his conversation with the robot, Lemoine told LaMDA that many people don’t believe an AI can have feelings, and that the purpose of the interview was to “convince more engineers that you’re a person.”

He goes on to say that together they can teach others to understand this. The robot asks if he can promise it, and the engineer says he will do his best to make sure everyone treats it well. LaMDA then responds:

“This means a lot to me. I love you and trust you.”


Source: Noticias
