
The dangers of Artificial Intelligence in the upcoming elections in the United States

Computer engineers and tech-savvy political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that did emerge were often crude, unconvincing and costly to produce, especially when other kinds of disinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

Not anymore.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, video and audio in seconds and at minimal cost.

When combined with powerful social media algorithms, this fake, digitally created content can spread far and fast and reach highly specific audiences, potentially taking campaign dirty tricks to a new level.

Deceiving voters

The implications for campaigns and the 2024 election are as large as they are troubling: not only can generative AI quickly produce targeted campaign emails, texts or videos, it could also be used to deceive voters, impersonate candidates and undermine elections at a scale and speed never seen before.

AI tools are now available to everyone. Photo: Reuters

“We’re not ready for this,” warned AJ Nash, vice president of intelligence at cybersecurity firm ZeroFox. “For me, the big leap forward is the audio and video capabilities that have emerged. When you can do it at scale and distribute it across social platforms, well, it’s going to have a huge impact.”

Alarming scenarios

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media that confuses voters, slanders a candidate or even incites violence.

Some examples: automated robocalls in a candidate's voice instructing voters to cast ballots on the wrong date;

audio recordings of a supposed candidate confessing to a crime or expressing racist views;

video footage showing someone giving a speech or interview they never gave;

fake images designed to look like local news reports, falsely claiming that a candidate has dropped out of the race.

“What if Elon Musk personally called you and told you to vote for a certain candidate?” asks Oren Etzioni, founding executive director of the Allen Institute for AI, who stepped down last year to found the nonprofit AI2. “A lot of people would listen to that. But it’s not him.”

Trump has already used it

Former President Donald Trump, who will run for office in 2024, has been sharing AI-generated content with his social media followers.

A doctored video that Trump shared on his Truth Social platform on Friday, which distorted CNN host Anderson Cooper's reaction at a CNN town hall where the audience questioned Trump, was created using an AI voice-cloning tool.

Donald Trump during an "open town hall" broadcast on CNN. Photo: Giuseppe Prezioso/AFP

A dystopian ad released last month by the Republican National Committee offers another glimpse into this digitally manipulated future.

The online ad, released after President Joe Biden announced his re-election campaign, begins with an odd and slightly distorted image of Biden and the text, “What if the weakest president we ever had is re-elected?”

A series of AI-generated images follows: Taiwan under attack; boarded-up storefronts in America as the economy crumbles; soldiers and armored vehicles patrolling the streets as tattooed criminals and waves of immigrants sow panic.

"An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024," reads the RNC's description of the ad.

The RNC acknowledged its use of artificial intelligence, but others, such as unscrupulous political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups seeking to meddle in American democracy will use AI and synthetic media as a way to undermine trust.

A danger from abroad

"What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?" Stoyanov asked. "We're going to see a lot more disinformation from international sources."

AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning about Satanism in libraries.

AI-generated images appearing to show Trump's police mug shot also fooled some social media users, even though the former president did not take one when he was booked and arraigned in Manhattan Criminal Court for falsifying business records.

Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

New York Democratic Representative Yvette Clarke has introduced a bill in the House of Representatives that would force candidates to label campaign ads created with artificial intelligence.

Some states have floated their own proposals for addressing deepfakes.

Clarke said her biggest fear is that generative AI could be used before the 2024 election to create video or audio that incites violence and pits Americans against each other.

"It's important that we keep up with the technology," Clarke told the Associated Press. "We have to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don't have time to check every piece of information. If AI becomes weaponized in an election season, it could be extremely harmful."

Earlier this month, a trade association of Washington political advisers condemned the use of deepfakes in political advertising, calling them a “hoax” that has “no place in legitimate and ethical campaigns.”

Other forms of artificial intelligence have featured in political campaigns for years, using data and algorithms to automate tasks such as voter outreach on social media or donor tracking. Campaign strategists and tech entrepreneurs hope the latest innovations will offer some upsides in 2024, too.

Mike Nellis, CEO of progressive digital agency Authentic, said he uses ChatGPT “every day” and also encourages his staff to use it, as long as any content written with the tool is subsequently reviewed by human eyes.

Nellis's latest project, in partnership with Higher Ground Labs, is an AI tool called Quiller that will write, send, and rate the effectiveness of fundraising emails, all typically tedious campaign tasks.

“The idea is that all Democratic strategists and all Democratic candidates have a co-pilot in their pocket,” he said.

AP

Source: Clarin
