Guardian analysis… AI lacks relevant standards and internal transparency
"Today's AI is immature, with no standards, no certification, and no professional organizations."
"With no real regulation of development companies, everything depends on the individual developer."
With Sam Altman's return, the OpenAI upheaval that shook the American technology industry has come to an end. But some say the incident revealed that there are no standards governing artificial intelligence (AI) technology and that the field depends heavily on a few individuals. AI technology is dangerous enough to threaten humanity, yet this episode exposed a lack of both relevant standards and internal transparency.
On the 23rd (local time), the British newspaper the Guardian reported that the conflict inside OpenAI has raised a range of questions about the safety of AI technology.
Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University in the United States, said of the OpenAI situation, "The AI we are seeing now is immature AI. There are no standards, no professional organizations, and no certification." He added, "The AI that gets built depends on the small number of people creating it, and the influence of those few people is disproportionate."
Sam Altman's dismissal was announced unilaterally by OpenAI's board of directors on the 17th. Even Microsoft (MS), OpenAI's largest shareholder with a $13 billion investment, was not informed in advance.
Confusion followed, and Altman's return unfolded through the improvised actions of the people involved rather than through any established procedure. More than 95% of OpenAI's roughly 750 employees signed a letter stating that they would resign and join Microsoft if Altman did not return, which weighed heavily in his reinstatement.
The Guardian noted that, with no actual regulation of companies developing AI, the personal character of individual AI developers takes on outsized importance.
The newspaper also said the OpenAI incident exposed the lack of transparency at AI companies: development of the latest AI rests in the hands of a small group of executives operating behind closed doors.
Professor Ghani pointed out, "We have no idea how the change in OpenAI's board of directors will change the nature of ChatGPT or DALL-E (its image-generation AI). There is currently no public agency testing programs like ChatGPT, and companies are not transparent about updates. Compare this to software updates on iPhone or Android."
Although the OpenAI incident has been resolved with Altman's return, it remains unclear why Altman was in conflict with the board.
There had been repeated reports that the board and Altman disagreed over the risks and profitability of AI. On the same day, however, the Financial Times (FT), citing a person with direct knowledge of Altman and the board, reported that the board's decision to fire him did not stem from concerns about the pace of AI research; the person instead pointed to a breakdown of trust between the board and Altman.
Paul Barrett, deputy director of the Center for Business and Human Rights at New York University's Stern School of Business, said the fight for control of OpenAI "reminds us of the danger that exists": as companies jockey for power, the volatility of a relatively immature field could sway key decisions about how to keep AI systems safe.
He added, "Judgments about when it is safe to release unpredictable AI systems to the public should not be dictated by such struggles for control."
Source: Donga