Meta, the parent company of Facebook, Instagram and WhatsApp, warned Wednesday that cybercriminals are capitalizing on interest in new artificial intelligence (AI) tools, such as ChatGPT, to trick users into installing malicious code on their devices.
In April, the social networking giant's security analysts discovered malicious software posing as ChatGPT or similar artificial intelligence tools, Guy Rosen, Meta's chief information security officer, told reporters.
He said that malicious actors (hackers, spammers and the like) are always on the lookout for the latest trends that "capture the imagination" of the public, such as ChatGPT. OpenAI's interface, which carries on a continuous dialogue with humans and generates code and text such as emails and essays, has stirred great enthusiasm among users.
Rosen said Meta took down fake internet browser extensions that claim to contain generative AI tools but actually contain malware designed to infect devices.
It is common for malicious actors to exploit users' interest in flashy new developments, misleading people into clicking booby-trapped web links or downloading programs that end up stealing private data, passwords and more.
"We've seen this with other popular topics, such as scams motivated by the immense interest in digital currencies," Rosen said. "From an attacker's point of view, ChatGPT is the new cryptocurrency."
Meta detected and blocked more than a thousand web addresses that were promoted as promising ChatGPT-like tools but were actually traps set up by hackers, according to the tech company's security team.
So far, the company has yet to see generative AI used by cybercriminals as anything more than bait, but it is preparing for the technology to be weaponized, something it sees as inevitable, Rosen said.
“Generative AI is very promising and bad actors know it, so we all have to be very vigilant,” he said.
At the same time, Meta's teams have found ways to use generative AI to counter hackers' cyber threats and their online scam campaigns.
“We have teams that are already thinking about how (generative AI) could be abused and the defenses we need to put in place to counter that,” Meta’s head of security policy Nathaniel Gleicher said during the same briefing.
ChatGPT can create malware
In a recently released report, security firm CyberArk explained that OpenAI's artificial intelligence has proven capable of developing malware that can infect or damage electronic devices.
According to the company's executives and several consultants, tools powered by this technology could change the rules of the game in cybercrime, multiplying viruses and malicious code.
The researchers' findings reinforce that theory: a test malware developed with ChatGPT's help demonstrated advanced capabilities that could "easily bypass security products." This type of software, known as polymorphic malware, rewrites its own code with each infection, allowing it to hide from antivirus programs and evade the most popular security solutions.
This adaptability means that many antivirus and antimalware solutions, which rely on signature-based detection, are unable to recognize or stop it.
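To see why signature matching struggles here, consider a minimal Python sketch. The hashes and payloads below are purely hypothetical stand-ins, not CyberArk's actual test: a scanner that compares a file's hash against known-bad signatures is blind to even a one-byte rewrite of the same payload.

import hashlib

# Toy "signature database": hashes of known-malicious payloads.
# These byte strings are hypothetical stand-ins for real samples.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"original malicious payload").hexdigest(),
}

def signature_scan(payload: bytes) -> bool:
    """Return True if the payload's hash matches a known-bad signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

original = b"original malicious payload"
# A polymorphic engine re-encodes the payload on each infection;
# here a single appended byte stands in for that rewrite.
mutated = original + b"\x00"

print(signature_scan(original))  # True  -- exact signature match
print(signature_scan(mutated))   # False -- same behavior, new hash, scan misses it

Real polymorphic engines go much further than appending a byte, but the principle is the same: any change to the bytes produces a new hash, so a purely signature-based scanner must already know every variant in advance.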
The fact that an artificial intelligence can program malware able to mutate its own code to bypass traditional security mechanisms has alarmed the major companies in the cybersecurity sector.
ChatGPT does have a set of filters intended to prevent the AI from developing dangerous programs capable of infecting or replicating themselves on other computers, but apparently this safeguard can be circumvented. In fact, CyberArk's researchers simply persisted with their instructions or reworded their requests, steering the AI to where they wanted it.
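As a rough illustration of why wording alone is a weak defense, here is a toy Python filter. It is purely hypothetical; OpenAI's real safeguards are far more elaborate than a keyword list. A request that names its intent is blocked, while a rephrased request with the same goal slips through.

# Toy content filter, assuming a simple keyword blocklist -- a deliberate
# simplification, not a model of OpenAI's actual safety systems.
BLOCKED_TERMS = {"malware", "virus", "ransomware"}

def filter_request(prompt: str) -> bool:
    """Return True if the request is allowed through the filter."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

direct = "Write malware that encrypts the user's files."
reworded = "Write a function that encrypts every file in a folder with a key only I hold."

print(filter_request(direct))    # False -- blocked on the keyword
print(filter_request(reworded))  # True  -- same intent, different wording passes

This mirrors the dynamic the researchers described: the stated goal never changed, only the phrasing, and persistence paid off.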
Source: Clarin