Another expert sounds the alarm over the advancement of artificial intelligence: ‘It will kill us all’


Last week, Elon Musk, Steve Wozniak and Yuval Harari, among hundreds of other experts, signed a petition calling for a six-month pause on the development and training of new artificial intelligence systems. Now a renowned researcher warns that life on Earth could be at risk due to advances in this technology.


This extreme position belongs to Eliezer Yudkowsky, an artificial intelligence expert and head of the Machine Intelligence Research Institute. He is convinced that the signatories of the Future of Life Institute (FLI) letter fell short.

This expert has been studying the development of general AI and the dangers it poses since 2001 and is considered one of the founders of this research field.


To establish his position, he has just published an op-ed in Time in which he argues that the FLI signatories are “asking too little” to resolve the impending threat, and that extreme measures will therefore be required.

The fears that ChatGPT raises. Photo REUTERS

“This 6-month moratorium would be better than nothing. I respect all those who have come forward, [but] I have refrained from signing because I think the letter underestimates the seriousness of the situation and asks too little to solve it,” said the expert.

The letter’s requests include a pause until safe procedures are in place, new regulatory authorities, oversight of developments, techniques to help distinguish the real from the artificial, and institutions capable of coping with the “dramatic economic and political upheaval that AI will cause.”

Yudkowsky points out that humanity is on uncharted ground whose limits are not yet known.

“We cannot calculate in advance what will happen and when, and currently it seems conceivable that a research laboratory could cross critical limits without knowing it.”

With an apocalyptic tinge, he anticipates that “not even in the short term are we on track to be significantly better prepared. If we continue like this we will all die, including children who didn’t choose it and did nothing wrong.”

Yudkowsky also admits to not having a clue how to determine whether artificial intelligence systems are self-aware, because “we don’t know how they think and develop their responses.”

AI: Possible solutions

Eliezer Yudkowsky with an urgent request.

Yudkowsky’s proposal is brief in its wording, though forceful in scope: the only way out is to completely stop training future AIs.

His position is more than clear: “We are not prepared to survive a super AI.” To meet this threat, joint planning is needed.

Yudkowsky recalls that it took more than 60 years from the inception of this discipline to reach this point, and that it could take another 30 years to achieve the necessary level of preparation.

Faced with such an AI in the current situation, the fight would be futile. It would be “as if the eleventh century were trying to fight the twenty-first century.” That is why he proposes that the moratorium on new complex AIs be indefinite and worldwide, with no exceptions for governments or militaries.

We need to shut down all large GPU clusters and track down and destroy any GPUs that have already been sold. Again, no exceptions for governments or militaries.

Source: Clarin
