On the 30th of last month (local time), U.S. President Joe Biden signed an executive order mandating AI safety assessments to prevent artificial intelligence (AI) from being used to produce weapons of mass destruction (WMD), such as nuclear or biological weapons. As the U.S.-China competition over AI hegemony intensifies, the measure is intended to serve as a preemptive safeguard against AI falling into the hands of hostile countries or terrorist groups and posing a fatal threat to U.S. national security. By introducing strong AI regulations that even invoke the Defense Production Act, a Korean War-era law allowing the government to intervene in private industry, the Biden administration made clear its intention to control AI as a core national strategic technology.
The executive order on the 'Safe and Trustworthy Development and Use of AI' signed by President Biden that day is the U.S. government's first regulatory measure on AI. It contains provisions covering the entire field, from AI training methods such as machine learning through development, production, and services. The White House emphasized that it is "the most comprehensive measure to protect Americans from the potential risks of AI systems."
The executive order mandates the formation of 'red teams' that identify vulnerabilities during AI development and requires safety test results to be reported to the government. To this end, the U.S. Department of Energy is to prepare testing guidelines within nine months, including the development of an AI model evaluation tool.
Regarding the AI testing guidelines in particular, the executive order states that, "at a minimum," an evaluation tool must be developed that can determine how great a threat an AI model could pose to nuclear, chemical, and biological weapons, critical infrastructure, and energy security. Since commercially developed AI could be diverted to military uses such as weapons production at any time, the aim is to check, from the development stage onward, whether an AI model could be used to produce WMD such as nuclear or biological weapons, or to mount cyberattacks on key infrastructure such as U.S. power, communications, and transportation networks.
The order also requires companies developing new AI models to report their development intentions, training plans, and cybersecurity measures to the Department of Commerce. In addition, it directs the National Security Council (NSC) to prepare and submit an 'AI National Security Report' covering both the possibility of hostile countries using AI to threaten the United States or its allies and the ways the United States can use AI to strengthen national security.
It also includes a requirement to attach watermarks to AI-generated content to prevent the spread of fake information. At the signing ceremony, President Biden said, "Scammers can record your voice in three seconds (with AI) and imitate it," adding, "Watching some of my own (AI-generated) videos, there are times I think, 'When on earth did I say that?'"
The executive order also directs the U.S. Copyright Office to prepare copyright guidelines related to AI training within 180 days, with the goal of producing recommendations on copyright protection for creative works and news content used to train AI. It further contains provisions to support AI development and to attract key talent in critical fields such as the military and medicine through the government budget.
Analysts say that President Biden's decision to announce the executive order ahead of the AI summit held in the UK on the 1st signals the United States' determination to take the lead in AI regulation. Europe has decided to introduce AI regulation legislation within this year, and the Group of Seven (G7) announced 11 codes of conduct on the 30th of last month, including a requirement to attach watermarks to AI-generated content. President Biden emphasized, "As the challenge of AI is global, we will ensure that the United States continues to demonstrate leadership."
The 11 codes of conduct recently announced by the G7 were created to minimize the risks associated with AI technology while leveraging its benefits. Among other things, they state that advanced AI should not be developed or deployed in ways that pose significant risks to human rights, call for authentication mechanisms that allow users to identify AI-generated content, urge that development proceed in compliance with international technical standards, and require the protection of personal information and intellectual property rights.
The AI summit to be held on the 1st will be attended by U.S. Vice President Kamala Harris, OpenAI CEO Sam Altman, and Microsoft CEO Satya Nadella. Among Korean companies, Samsung Electronics and Naver will attend the meeting.
Reported from Washington
Source: Donga
Mark Jones is a world traveler and journalist for News Rebeat. With a curious mind and a love of adventure, Mark brings a unique perspective to the latest global events and provides in-depth and thought-provoking coverage of the world at large.