Sadosky Foundation Proposes to Regulate Artificial Intelligence: ‘It Will Replace Human Labor’

Fernando Schapachnik, executive director of the Sadosky Foundation, considered it “inevitable” that Artificial Intelligence (AI) programs will affect jobs and called for regulation.

In line with the letter published by Elon Musk this week, he insisted on regulatory measures for these platforms, which have returned to the center of public debate not because of their ability to multitask or add millions of users, but because hundreds of academics and technology executives have asked for a pause in their development, arguing that they pose “great risks to humanity”.

In an interview with Télam, Schapachnik highlighted the questions raised by Elon Musk, founder of Tesla and SpaceX, and by the Israeli historian and writer Yuval Noah Harari, among others, who ask that the risks of new technologies be analyzed before they are placed on the market, and called for “discussing the false dichotomy between innovation and regulation”.

The Sadosky Foundation is a public-private institution that promotes the articulation between the scientific-technological system and the productive structure in everything concerning the use of information and communication technologies (ICT).

In charge of the foundation’s executive direction since mid-2021, Schapachnik is a professor in the Computer Science Department of the Faculty of Exact and Natural Sciences of the University of Buenos Aires, holds a PhD in Computer Science, and is a researcher at the UBA-CONICET Institute of Computer Science.

─We shouldn’t be naive: there are very important academics behind the letter, but so are the losers of the race in the technology world. In any case, they ask for more regulation. They argue that this opens up fundamental questions for humanity and say the answers to those questions cannot be left in the hands of unelected technology leaders. They recognize that the market cannot be what defines these transcendental issues for humanity, because that would be undemocratic. The democratic thing would be for the people we elect to decide; that is what democracy is.

─The risk assessment of a technology should be done from the moment of its conception, not once it has already been released. The question when developing a technology should be whether the benefits it will provide justify the risks. A technology is valuable if it serves society as a whole; if all it does is enrich a handful of people, then it has no social value.

─The effect they have on the world of work can only concern us. That they will replace human labor is inevitable, so the impact on inequality and unemployment must be a primary concern. The tool was put through a series of standardized tests, the ones you take to enter university, and the difference in performance between the original version of ChatGPT and the new version released four months later was clear. Metaphorically speaking, this tool could qualify for jobs that large parts of the population could not, because they would not meet those qualifications.

─When these systems lack information, they are said, technically, to “hallucinate”. If we focus only on that, we miss the important point: they obviously carry many biases, but a citizen picked at random also has biases and gets things wrong. If I ask one of these systems to do a task that a human does today, such as drafting a rental agreement, answering a call center, initiating a sale, or writing a news article, it can do it well enough to put massive amounts of human employment at risk.

The warning from Musk and other technologists calls for a six-month pause in research. Reuters photo

─It is a technology that allows the creation of images and videos, so we can no longer rely on our senses to distinguish fact from fiction. The problem of fake news is not new, but the difference is that previously the alleged reality was mediated, and the lie appeared in that mediation. Now AI forces me to distrust something as basic as my senses. If I am watching a video with my own eyes, how can it be a lie? How can it be a lie if I hear an audio clip in that person’s voice?

─It is time to discuss the false dichotomy between innovation and regulation. We need to think seriously about regulating these technologies. The pharmaceutical industry is heavily regulated: a product cannot be released before it is approved in each country under very strict standards, and no one would characterize that industry as lacking innovation. It is false that more regulation is a barrier to innovation.

─It is an issue for international organizations to work on, and countries should also develop national regulatory frameworks as a matter of sovereignty. The digital market is not used to regulation because it is very new in the history of humanity and has wielded lobbying power that has resisted every attempt at regulation. The potential of AI-based technologies to bring change and cause harm seems beyond question to me; it is time to regulate them.

About the foundation

“The Dr. Manuel Sadosky Foundation is a public-private institution whose objective is to promote the articulation between the scientific-technological system and the productive structure in everything related to Information and Communication Technologies (ICT),” they explain on the official site.

“Established by National Delegated Decree No. 678/09, the Foundation is chaired by the Minister of Science, Technology and Innovation. Its vice-presidents are the presidents of the most important chambers of the ICT sector: CESSI (Chamber of Software and IT Services) and CICOMRA (Cámara de Informática y Comunicaciones de la República Argentina). Since April 2011, it has had an executive structure aimed at implementing various programs that foster this articulation,” they conclude.

With information from Télam (Iván Hojman)

Source: Clarin
