
AI-generated garbage is already contaminating our culture



Synthetic content generated with artificial intelligence now appears with increasing frequency in our news feeds and search results.


The stakes go far beyond what appears on our screens.

Our entire culture is being hit by the wave of artificial intelligence, an insidious presence creeping into our most important institutions.


Let’s consider the science.

Immediately following the launch of GPT-4, OpenAI’s latest artificial intelligence model and one of the most advanced in existence, the language of scientific research began to change.

Especially in the field of artificial intelligence itself.

A new study this month examined scientific peer reviews – researchers’ official assessments of one another’s work, which form the bedrock of scientific progress – submitted to several high-profile and prestigious conferences dedicated to the study of artificial intelligence.

At one of these conferences, peer reviews used the word “meticulous” almost 3,400 percent more than in the previous year.

The use of “commendable” rose by almost 900 percent, and that of “intricate” by more than 1,000 percent.

Other major conferences have shown similar patterns.

These expressions, of course, are among the favorite buzzwords of modern large language models like ChatGPT.

In other words, a significant number of researchers at AI conferences were caught handing their peer reviews of others’ work over to AI – or, at the very least, writing them with heavy assistance from this kind of technology.
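(A note for the technically curious: a spike like this is straightforward to measure by comparing word frequencies across two corpora of reviews. The sketch below is purely illustrative – the corpora and counts are invented stand-ins, not the study’s actual data.)

```python
from collections import Counter

def per_million(word: str, text: str) -> float:
    # Occurrences of `word` per million tokens of `text`.
    tokens = text.lower().split()
    return Counter(tokens)[word] / len(tokens) * 1_000_000

# Invented corpora standing in for two years of conference peer reviews.
reviews_before = "the method is sound and the results are convincing " * 1000
reviews_after = ("the meticulous analysis is commendable and the "
                 "intricate method is sound ") * 1000

for w in ["meticulous", "commendable", "intricate"]:
    print(f"{w}: {per_million(w, reviews_before):.0f} -> "
          f"{per_million(w, reviews_after):.0f} per million tokens")
```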


And the closer the submission deadline for reviews came, the more artificial intelligence was used in them.

If this makes you uncomfortable – especially given the current unreliability of artificial intelligence – or if you think that perhaps science should be reviewed not by artificial intelligences but by scientists themselves, those feelings highlight the paradox at the heart of this technology:

it is not clear where the ethical line lies between scam and regular use.

Some AI scams are easy to spot, such as a medical journal article featuring a cartoon rat with enormous genitals.

Many others are more insidious, such as the mislabeled, hallucinated regulatory pathway described in that same article – an article that was itself peer reviewed (perhaps, one might speculate, by another AI?).

What happens when AI is used in one of its intended ways: to help with writing?

Controversy recently arose when simple searches of scientific databases turned up phrases like “As an AI language model” in places where authors relying on artificial intelligence had forgotten to cover their tracks.

If the authors had simply erased those accidental watermarks, would their use of artificial intelligence to write their articles have been fine?

What happens in science is a microcosm of a much larger problem.

Do you post on social networks?

Any viral post now draws crowds of AI-generated replies.

Instagram is filling up with AI-generated models, Spotify with AI-generated songs.

Do you publish a book?

Shortly thereafter, AI-generated “workbooks” that supposedly accompany your book usually appear for sale on Amazon (their content is wrong; I know because it happened to me).

Nowadays, the first results of a Google search are often AI-generated images or articles.

Major media outlets, such as Sports Illustrated, have created AI-generated articles attributed to equally fake author profiles.

Merchants selling search engine optimization methods openly boast of using artificial intelligence to create thousands of spam articles that steal traffic from competitors.

Then there is the growing use of generative artificial intelligence to create cheap synthetic videos for kids on YouTube.

Some are Lovecraftian horrors, such as music videos about parrots in which the birds have eyes within eyes and beaks within beaks, and morph incomprehensibly while singing in an artificial voice: “The parrot in the tree says hello, hello!”

The narratives make no sense, characters appear and disappear at random, and basic facts like the names of shapes are wrong.

After I identified several suspicious channels in my newsletter, The Intrinsic Perspective, Wired found evidence of generative AI use in the production pipelines of some accounts with hundreds of thousands or even millions of subscribers.

As a neuroscientist, this worries me.

Isn’t it possible that human culture contains cognitive micronutrients – things like coherent sentences, narratives and character continuity – that developing brains need?

Einstein is supposed to have said: “If you want your children to be intelligent, read them fairy tales. If you want them to be very intelligent, read them more fairy tales.”

But what happens when a child mostly consumes AI-generated dream slop?

We are in the midst of a vast development experiment.

Garbage

There is now so much synthetic junk on the Internet that AI companies and researchers themselves are worried, not about the health of the culture, but about what will happen to their models.

As AI capabilities surged in 2022, I wrote about the risk of culture becoming so inundated with AI creations that, when the AIs of the future are trained, earlier AI output will bleed into the training set, producing a future of copies of copies of copies as content grows ever more formulaic and predictable.

In 2023, researchers introduced a technical term to describe how this risk affected AI training:

model collapse.
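(For readers who want intuition rather than a definition, here is my own toy sketch, not the researchers’ actual setup. Each “model” is just a Gaussian fitted to a small sample drawn from the previous model, so every generation trains only on the last generation’s synthetic output.)

```python
import random
import statistics

# Toy model collapse: fit each new "model" to samples from the previous one.
random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the real data distribution
for gen in range(1, 101):
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]  # small sample
    mu = statistics.mean(synthetic)       # the next model is fitted...
    sigma = statistics.stdev(synthetic)   # ...only to synthetic data
    if gen % 20 == 0:
        print(f"generation {gen:3d}: stdev = {sigma:.3f}")
```

On average the fitted spread shrinks toward zero over the generations: copies of copies of copies.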

In a way, we and these companies are in the same boat, rowing through the same mud that is being dumped into our cultural ocean.

With this unlovely analogy in mind, it’s worth considering what may be the clearest historical parallel to our current situation: the environmental movement and climate change.

Just as an inexorable economy once drove companies and individuals to pollute, AI’s cultural pollution is driven by the rational decision to feed the Internet’s voracious appetite for content as cheaply as possible.

While environmental problems are nowhere near resolved, there has been undeniable progress, which has kept our cities largely free of smog and our lakes largely free of sewage.

How?

Before any specific solution at the political level, it was recognized that environmental pollution was a problem requiring external legislation.

This view was influenced by a perspective developed in 1968 by Garrett Hardin, a biologist and ecologist.

Hardin emphasized that the problem of pollution was driven by people acting in their own interest, and that, therefore, “we are locked into a system of ‘fouling our own nest,’ so long as we behave only as independent, rational, free-enterprisers.”

Hardin summarized the problem as a “tragedy of the commons.”

This approach was decisive for the environmental movement, which came to depend on government regulation to do what companies alone could not or would not do.

Another collective tragedy

Once again we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages the use of cheap AI content to maximize clicks and views, which in turn pollutes our culture and even weakens our grip on reality.

And so far, big AI companies have refused to pursue advanced means of identifying AI work, which they could do by adding subtle statistical patterns hidden in the use of words or the pixels of images.
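(A minimal sketch of what such a statistical pattern could look like, in the spirit of published “green list” watermarking schemes: each word choice is quietly biased toward a pseudorandom half of the vocabulary derived from the preceding word, and a detector that knows the scheme recomputes that bias. The vocabulary, bias rate and wording here are all invented; real schemes operate on a language model’s token probabilities.)

```python
import hashlib
import random

VOCAB = ["the", "a", "model", "data", "culture", "garbage", "internet",
         "writes", "reads", "grows", "shrinks", "fast", "slow", "online"]

def green_list(prev_word: str) -> set:
    # Derive a deterministic "green" half of the vocabulary from the
    # previous word, so a detector can recompute it later.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n_words: int, watermark: bool) -> list:
    rng = random.Random(42)
    words = ["the"]
    for _ in range(n_words):
        if watermark and rng.random() < 0.9:
            greens = sorted(green_list(words[-1]))
            words.append(rng.choice(greens))  # quietly prefer "green" words
        else:
            words.append(rng.choice(VOCAB))   # unbiased choice
    return words

def green_fraction(words: list) -> float:
    # Detector: what share of words fall in the previous word's green list?
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

print("watermarked:", green_fraction(generate(500, True)))   # roughly 0.95
print("unmarked:   ", green_fraction(generate(500, False)))  # roughly 0.5
```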

A common justification for inaction is that human editors could always tinker with whatever patterns are implemented, if they know enough.

Yet many of the problems we are experiencing are not caused by motivated, technically skilled attackers, but by ordinary users failing to abide by a line of ethical use so thin as to be nearly nonexistent.

Most would have no interest in advanced countermeasures against statistical patterns built into products that, ideally, should be marked as AI-generated.

That’s why independent researchers were able to detect AI output in the peer review system with surprisingly high accuracy:

they actually tried.

Likewise, teachers across the country have now devised homemade detection methods, such as adding hidden requests for particular word-use patterns to essay prompts – requests that appear only when the prompt is copied and pasted.

In particular, AI companies appear to resist any patterns baked into their products that could improve AI-detection efforts to reasonable levels, perhaps because they fear that enforcing such patterns might interfere with the model’s performance by constraining its outputs too much – even though there is currently no evidence that this is a risk.

Despite earlier public promises to develop more advanced watermarks, it is increasingly clear that the companies’ reluctance and foot-dragging stem from the fact that detectable products go against the AI industry’s bottom line.

To address this refusal by companies to act, we need the equivalent of a Clean Air Act: a Clean Internet Act.

Perhaps the simplest solution would be to legally require generated products to carry built-in advanced watermarks – patterns that cannot be easily removed.

Just as the 20th century required major interventions to protect the environment we all share, the 21st century will require major interventions to protect a different, but equally crucial, common resource – one we have not noticed until now, because it was never under threat:

our shared human culture.

c.2024 The New York Times Company

Source: Clarin

