The advantages of the programs artificial intelligence (AI) such as ChatGPT or Gemini are already known. By giving them just a couple of commands they will be able to write a more or less decent monograph or a text for a blog, as well as many other activities. These tools already have cousins like Sora, who with just a few words can put together a one-minute clip of images, with actors and everything.
But it seems that just as they can help us, they can also send us to the bottom of the abyss. IBM researchers They proved that ChatGPT can be hypnotized. That is, through interaction, it can be tricked into recommending actions that are harmful to cybersecurity. In short, they found it a huge vulnerability.
In an effort to explore how cyber attackers might manipulate the intentions of LLMs (large learning models), researchers They were able to hypnotize five LLMs popular ones like ChatGPT to carry out malicious activities.
Simply by interacting with them in an unscrupulous manner, the technicians of the North American company managed to ensure that the program ended up advising the user offer your confidential data or even provide vulnerable code.
It’s a problem that seems to have no solution. It happens that these generative artificial intelligence programs have this characteristic, if only if they are permanently fed by thousands of users. AND From that interaction they learn and add “intelligence”.
The problem is that, due to the open spirit of generative AI, some of these users they can deliberately take that interaction where it suits them bestcausing these programs to provide advice that will harm the common user.
“There should be a way to fix it. It seems to me that the key is inside raise awareness and educate to the people. That is, don’t assume that the text offered by ChatGPT is clean. Anyone could interact with him to manipulate him into responding “incorrectly”. With these programs you have to act like with any other on the web. Know what to share and what not. And don’t think that what he advises is always what to do,” he says Clarion Pamela Skokanovic, IBM Security manager for Argentina, Paraguay and Uruguay.
But the biggest risk is that these LLMs start to be bombarded by hackers and thus start providing malicious advice that jeopardizes the cybersecurity of their users.
consulted by Clarion Regarding the possibility of hypnotizing an artificial intelligence program, Luis Corrons, digital security expert at the Avast company, does not mince words: “Of course, Any artificial intelligence can be trained to act in a certain way, it can even be manipulated“, he claims.
And he adds: “It’s not really that artificial intelligence, like ChatGPT, knows what is right or wrong, it’s simply that it has been trained to ‘act as best as possible’, however, if we give an instruction trying to manipulate the system , we can get a different result than the ‘correct’ one.”
An evil that never stops growing
With everything, now a cyber attack occurs every 11 seconds. Additionally, Red Hat Insights found that 92% of its customers have at least one known unresolved vulnerability or exposure that can be exploited in their environment. Additionally, 80% of the top ten vulnerabilities detected across all systems in 2023 received a “high” or “critical” severity rating.
The future? According to an analysis by IBM, cybercriminal organizations are mobilizing greater investments in different tools.
Since AI can no longer be stopped, attempts are at least being made to curb it: “Progress is being made in the regulation of AI. In Europe it is proposed that AI could have different levels of consumption. That is, ordinary people could have a certain access, experts another level and so on”, concludes Skokanovic.
Source: Clarin
Linda Price is a tech expert at News Rebeat. With a deep understanding of the latest developments in the world of technology and a passion for innovation, Linda provides insightful and informative coverage of the cutting-edge advancements shaping our world.