ChatGPT creator and more than 300 experts warn that AI “represents an extinction risk”

A group of scientists and industry leaders in artificial intelligence (AI) signed a stark joint statement on Tuesday: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The declaration bears the signatures of figures such as Demis Hassabis, CEO of Google DeepMind; Dario Amodei of Anthropic; and OpenAI founder Sam Altman, among others.

It also bears the signatures of scientific heavyweights: Geoffrey Hinton (dubbed the “godfather” of AI, who spent part of his career at Google) and Yoshua Bengio, two of the three researchers who won the 2018 Turing Award (often called the Nobel Prize of computer science) for their contributions to artificial intelligence.

The text was published on the website of the Center for AI Safety, a San Francisco non-profit. It is a single sentence and explains very little: it does not say why artificial intelligence would pose a “risk of extinction” or why it is compared to pandemics and nuclear war.

Geoffrey Hinton, an AI pioneer, left Google earlier this month to warn of the technology’s “dangers.” Photo: archive

The declaration comes in a year in which generative artificial intelligence is growing explosively: ever since ChatGPT became popular for generating text, and Midjourney and Stable Diffusion for images, every tech giant has begun developing systems in this direction, as Google did with Bard and Microsoft with Copilot, both AI assistants meant to offer more accessible experiences to their users.

However, this is the second time this year that AI has been publicly and forcefully challenged. In March, Elon Musk and more than 1,000 experts signed an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4, warning of “profound risks to society and humanity.”

After the letter, Musk insisted that artificial intelligence could “cause the destruction of civilization.” Later, Bill Gates, founder of Microsoft, predicted the disappearance of teachers. And even Warren Buffett, the legendary investor and friend of Gates, likened artificial intelligence to the atomic bomb.

The difference with this new joint statement is that it explains very little: it raises a catastrophic scenario without substantiating it.

What is “existential risk” and how real is it?

Photo: AFP

This type of claim reflects what the field calls the “existential risk” of artificial intelligence.

“The idea of existential risk rests on an ill-founded concept: that an intelligence superior to the human could decide to extinguish humanity. It runs along the lines of the movie Terminator and Skynet, the system that becomes self-aware and decides to turn against humans,” Javier Blanco, PhD in Computer Science from the Eindhoven University of Technology in the Netherlands, explains to Clarín.

“Technologies like adversarial neural networks and machine learning have no way of constituting something like that: they are fairly basic technology, based on recognizing statistical patterns. Generative systems like ChatGPT are the same but complementary (generative systems built on classification) and pose no risk of a kind of intelligence that could be an existential threat to humanity,” he adds.

Adversarial neural networks generate new data by pitting different algorithms against each other. Machine learning is the branch of artificial intelligence that develops techniques by which computers “learn”, that is, improve their performance with use (something very noticeable in ChatGPT, for example).
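To make that definition concrete, here is a minimal sketch (ours, not Blanco’s or anything from the statement) of a program that “improves with use”: a toy Python model that recovers the rule y = 3x + 1 from noisy examples, getting more accurate as it processes more of them. The task, data and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: learn y = 3x + 1 from 200 noisy samples.
X = rng.uniform(-1, 1, size=200)
y = 3 * X + 1 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0  # model parameters, untrained at the start
lr = 0.1         # learning rate (illustrative choice)

for i, (x_i, y_i) in enumerate(zip(X, y), start=1):
    pred = w * x_i + b    # current guess
    err = pred - y_i      # how wrong the guess was
    w -= lr * err * x_i   # nudge parameters to shrink the error
    b -= lr * err
    if i in (1, 50, 200):
        print(f"after {i} examples: w={w:.2f}, b={b:.2f}")

By the end, w and b land close to the true values 3 and 1: the program got better purely from exposure to data, which is the sense of “learning” Blanco describes, just at a vastly smaller scale than a system like ChatGPT.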

For Blanco, the extermination of humanity remains a chimera: “That this is a long-term risk is about as probable as a giant asteroid striking and destroying the Earth: that some technology could give rise to hybrid or artificial cognitive entities interested in destroying the human race is a completely remote possibility,” he adds.

Still, for the expert, who is also a professor at the Faculty of Mathematics, Astronomy, Physics and Computer Science of the National University of Córdoba (UNC), there are concrete risks with these technologies, and they have nothing to do with the existential kind.

“There are labor risks: job losses. And we face developments that make impersonation and deception much more feasible (fake news, disinformation): that is a fact, and it is one of the problems. All of this has consequences that are difficult to measure today, but they are already having an impact on the social sphere: that is where the real concern lies,” he warns.

He also points to the concentration of these technologies in a small group of companies: “It is important to be able to distinguish real concerns and possible solutions (which do not necessarily coincide with what the companies are after) from concerns that are speculative and unlikely in most foreseeable futures.”

“Also, in truth, unlike a pandemic or a nuclear war, AI development does not take place in the public sphere: any group can make major innovations in AI outside the view of states or other organizations,” he says.

The scenario in which AI technologies are being developed is therefore uncertain. “We believe the benefits of the tools we have deployed so far vastly outweigh the risks,” Altman said in his testimony before the US Congress.

Statements like Tuesday’s do not seem to support that perspective, capping a paradoxical strategy: the very people developing the most powerful artificial intelligence tools are the ones signing a declaration warning of the possible extermination of humanity.

Source: Clarín
