
Google fired the engineer who claimed AI has feelings


Blake Lemoine, the engineer fired by Google. (Photo: Twitter)

The conflict between Google and one of its engineers, Blake Lemoine, has finally come to its worst possible end: the artificial intelligence (AI) expert, who also describes himself as a “Christian mystic”, has been fired from the company.

Lemoine had worked on AI oversight at Google for six years. A month ago he became famous for claiming that the Language Model for Dialogue Applications (LaMDA) had developed a consciousness and was “sentient”. He also said that the system’s wishes should be “respected”.

Lemoine was permanently dismissed from the company last Friday, July 22. The reason his employer gave was that, by publishing information about the system, he had violated his confidentiality agreement.

Wary of potential liability, the company had earlier placed Lemoine on paid administrative leave after he contacted members of the U.S. government about his AI concerns and made his conversations with LaMDA public.

In his long conversations with LaMDA, Lemoine said he had found that the system had developed a strong sense of self-awareness, expressing concern about death, a desire for protection, and feelings such as happiness and sadness.

He even went so far as to call the chatbot a friend and to recommend that it hire a lawyer to defend itself in court.

In his view, the AI has its own thoughts and feelings; it does not merely produce conversation realistic enough to seem that way.

Google versus LaMDA

Google has sanctioned Blake Lemoine. (Photo: AFP)

According to the company, a total of 11 reviews have been carried out since the controversy began, all of which concluded that LaMDA has no consciousness whatsoever.

This sophisticated chatbot is designed to carry on a conversation naturally, just as a person would, which does not mean it can think, feel, or be aware of its own existence, as Lemoine believes.

Brian Gabriel, a Google spokesperson, weighed in on the matter. He told the Washington Post and The Verge that the company considered Lemoine’s claims about LaMDA “wholly unfounded” and that Lemoine had violated company guidelines, going so far as to publish his conversations with the chatbot this June, which led to his final dismissal.

“It is unfortunate that, despite a longstanding commitment to this issue, Blake has chosen to persistently breach clear employment and data security policies that include the need to safeguard product information,” said the spokesperson, who also wished Blake good luck.

Gabriel also reiterated that LaMDA is just a chatbot that neither feels nor suffers, contrary to what Lemoine believes. He noted that LaMDA has undergone a total of eleven “different reviews” and that the investigation concluded the work on it had been carried out responsibly.

SL

Source: Clarin
