OpenAI to watermark content created with its AI image technology
As the U.S. presidential election begins in earnest with the Iowa Republican Party caucuses, artificial intelligence (AI) company OpenAI announced that it will block false information using its technology, such as ChatGPT, from being used in election campaigns.
According to the Washington Post (WP) on the 15th (local time), OpenAI announced the election policy on its blog that day.
OpenAI said it would not allow its technology to be used to create election campaign-related applications or to spread false information to disrupt voting.
The company also said that starting early this year, it will begin adding a watermark, a marker that allows AI-generated images to be identified, to content created with ‘DALL·E 3’, its image generation program.
OpenAI said, “We are working to predict and prevent related abuses, such as misleading deepfakes or chatbots impersonating candidates.”
In recent years, false information created with AI technology has spread indiscriminately on social media. The campaign of Florida Governor Ron DeSantis, a Republican presidential candidate, used a fake photo of former President Donald Trump hugging former White House COVID-19 adviser Anthony Fauci in an advertisement last year.
According to a WP report last October, Amazon’s AI home speaker Alexa responded that “the 2020 presidential election was stolen and full of election fraud,” and Democratic Senator Amy Klobuchar (Minnesota) expressed concern that, when asked what to do if the line at a polling place is too long, ChatGPT may provide fake addresses.
Politicians and experts are raising concerns that advances in AI technology will allow chatbots and image generators to increase the volume and sophistication of fake news.
In response to these concerns, Google announced last month that it would restrict its AI tools’ answers to election-related questions and require election-related advertisers to disclose when they use AI. Facebook’s parent company, Meta, is also requiring political advertisers to disclose whether they use AI.
However, it is questionable whether the fake-news policies put forward by these companies are effective, as watermarks placed on AI images can be easily edited out.
WP pointed out, “Big tech companies say they are working to address the problem and make forgery and tampering impossible, but as of now, no company appears to have found an effective method.”
Source: Donga
Mark Jones is a world traveler and journalist for News Rebeat. With a curious mind and a love of adventure, Mark brings a unique perspective to the latest global events and provides in-depth and thought-provoking coverage of the world at large.