Elon Musk and more than a thousand experts call for a pause in the development of artificial intelligence: “There are great risks for humanity”

Elon Musk and more than 1,000 global experts signed an open letter on Wednesday calling for a six-month pause in the development of artificial intelligence (AI) systems more powerful than GPT-4, the OpenAI model released this month, warning of “great risks for humanity”.

In the petition, posted on the futureoflife.org website, they call for a moratorium until safety systems are in place, including new regulatory authorities, surveillance of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of coping with the “dramatic economic and political upheaval (especially for democracy) that AI will cause”.

It is signed by figures who have publicly expressed fears of an uncontrollable artificial intelligence that surpasses humans, such as Musk, owner of Twitter and founder of SpaceX and Tesla, and the historian Yuval Noah Harari.

Sam Altman, head of OpenAI, the company behind ChatGPT, has acknowledged being “a little bit scared” that his creation could be used for “large-scale disinformation or cyber-attacks”.

“Society needs time to adjust,” he recently told ABC News.

“In recent months, we have seen AI labs locked in a headlong race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict or control,” the signatories say.

“Should we allow machines to flood our information channels with propaganda and lies? Should we automate away all jobs, including the rewarding ones? (…) Should we risk losing control of our civilization? These decisions should not be delegated to unelected technology leaders,” they conclude.

Signatories include Apple co-founder Steve Wozniak, members of Google’s DeepMind AI lab, Stability AI chief executive Emad Mostaque, as well as US AI experts, academics, and engineers and executives from Microsoft, an OpenAI partner.

Here is the full text of the open letter calling for a pause in AI development:

Pause Giant AI Experiments: An Open Letter

FILE PHOTO: The OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

AI systems with human-competitive intelligence can pose serious risks to society and humanity, as shown by extensive research [1] and acknowledged by leading artificial intelligence laboratories. [2] As set out in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, this level of planning and management is not happening, even though in recent months AI labs have been locked in a runaway race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, [3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and falsehoods? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that could eventually outnumber us, outsmart us and replace us? Should we risk losing control of our civilization? Such decisions should not be delegated to unelected technology leaders.

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. This confidence must be well justified and grow with the magnitude of a system’s potential effects. OpenAI’s recent statement on artificial general intelligence notes that “at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models”. We agree. That point is now.

Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. [4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever larger and unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. [5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Notes and references

[1] Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (March 2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Bucknall, B. S., and Dori-Hacohen, S. (July 2022). Current and Near-Term AI as a Potential Existential Risk Factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.

Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.

Cohen, M., et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).

Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

Hendrycks, D. & Mazeika, M. (2022). Risk X analysis for AI research. arXiv preprint arXiv:2206.05862.

Ngo, R. (2022). The Alignment Problem from a Deep Learning Perspective. arXiv preprint arXiv:2209.00626.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weidinger, L., et al. (2021). Ethical and Social Risks of Harm from Language Models. arXiv preprint arXiv:2112.04359.

[2] Ordóñez, V. et al. (2023, March 16). OpenAI CEO Sam Altman Says AI Will Reshape Society, Acknowledges Risks: “That scares me a little.” ABC News.

Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis urges caution with AI. Time.

[3] Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv:2303.12712.

OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

[4] There is ample legal precedent; for example, the widely adopted OECD AI Principles require that AI systems “function appropriately and do not pose unreasonable safety risk”.

[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

Source: Clarin
