Scams and deceptions with ChatGPT: what are the dangers associated with trendy artificial intelligence

With artificial intelligence (AI) taking center stage, various scams have appeared around ChatGPT, the popular chatbot that lets users interact in natural language and simulates human conversation. This was somehow inevitable: every new technological fad brings malware (viruses), unauthorized applications and more.

First, it is worth recalling what this chat actually is: “ChatGPT is an example of a program capable of holding varied natural-language dialogues, trained with techniques in which humans participate, both by supplying conversations for it to learn from and in the subsequent process of evaluating and improving the interactions,” explains Javier Blanco, PhD in Computer Science from the University of Eindhoven, the Netherlands.

From this concept, varied uses began to emerge: from writing work emails to drafting university essays, and even legal documents. Everyone began to exploit a technology as amazing as it is questioned.

And the big problem that accompanies any major technological advance is, almost always, the danger it poses for cyberattacks: the more users there are, the larger the attack surface becomes.

In this sense, two major categories must be distinguished. On the one hand, there is the use of ChatGPT to write malicious code, that is, to “weaponize” it, as they say in the jargon. This requires knowledge on the part of cybercriminals and, as will be seen, has already been exploited.

On the other hand, there is a more tangible side, closer to the average user: being deceived, both when using this tool and when trying to install it. Here it is worth remembering that ChatGPT has no official application, so anything that gets installed is very likely to end up being a headache in the future.

Below, experts explain the dangers ChatGPT currently entails, the precautions that must be taken and, finally, the positive aspects this technology offers for fighting cybercrime.

ChatGPT doesn’t have an app – beware

To use it from your phone you have to open it via browser. Photo Bloomberg

One of the most common risks is downloading an application that claims to be ChatGPT but isn’t: an apocryphal program.

“There are applications that look like a real ChatGPT app but have malicious objectives, such as subscribing the victim to a paid service to make them lose money, or stealing confidential data such as contacts, SMS, files or other information from our mobile phone,” explains Alberto Herrera of Pucará, a cybersecurity firm.

There are also apocryphal web pages that pretend to be the official ChatGPT site: “Many scammers create pages to steal financial data. One of the most common methods is to create web pages with ChatGPT-related payment links aimed at stealing credit cards or bank accounts,” he explains.

In this sense, it is worth noting that the only official way for the general public to access ChatGPT is through its website: chat.openai.com.

Of course, another recurring problem is classic phishing campaigns: “Scams also arrive via email, from domains or addresses similar to ChatGPT’s, with cloned pages that induce the user to install harmful applications.”
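The look-alike-domain trick the expert describes can be illustrated with a short sketch. The helper below is hypothetical (not from any official tool): it checks that a link’s hostname matches the official chat.openai.com exactly, which rejects classic phishing patterns such as subdomain padding (`chat.openai.com.evil-site.com`) or hyphenated imitations.

```python
from urllib.parse import urlparse

# Official host mentioned in the article; everything else is illustrative.
OFFICIAL_HOST = "chat.openai.com"

def is_official_chatgpt_url(url: str) -> bool:
    """Return True only if the URL's hostname is exactly the official domain."""
    host = urlparse(url).hostname or ""
    return host.lower() == OFFICIAL_HOST

# A cloned phishing page typically embeds the real name inside a longer host:
print(is_official_chatgpt_url("https://chat.openai.com/chat"))             # True
print(is_official_chatgpt_url("https://chat.openai.com.login-verify.io"))  # False
print(is_official_chatgpt_url("https://chat-openai.com"))                  # False
```

An exact-match check on the full hostname (rather than a substring search) is what defeats these clones, since `"chat.openai.com" in url` would accept all three examples above.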

To avoid this, it is essential to watch for certain signals: “Before installing an application on our phone, we must check that it comes from a reputable store — Google Play on Android or the App Store on iOS — that it has been downloaded many times by other users, and that it has a large number of reviews,” he explains.

The main problem when installing an app: ChatGPT doesn’t have one, so it is best not to install anything related. Photo Google Play Store

Another big problem is browser extensions, which can become an attack vector.

“Before installing an extension, it is important to check several things: that the domain or web address belongs to the browser in which we want to install it — google.com, for example — its number of downloads (the more, the better), whether it has reviews or comments from various users, whether it has a good reputation and whether, as far as possible, it follows the good practices of Google or whichever browser vendor we choose.”

Nor is there an official ChatGPT extension for Chrome. There are extensions from other companies, some of them reputable, but all unofficial — so their use of user data can be dubious.

ChatGPT as an ally of cybercrime

Ransomware: one of the activities for which ChatGPT can be abused. Photo Lockbit Blog

“ChatGPT has also added some spice to the modern cyberthreat landscape, as it quickly became apparent that code generation can help less-skilled threat actors launch cyberattacks effortlessly,” says Dario Opezzo, Regional Sales Manager at Palo Alto Networks.

“We’ve found that using ChatGPT you can successfully run an entire infection workflow, from creating a convincing phishing email to running a reverse shell capable of accepting commands in English,” he adds.

“In clandestine forums on the Dark Web, cybercriminals claim to use ChatGPT to create malware such as infostealers (information-stealing programs), build encryption tools and facilitate fraudulent activity. Researchers warn of attackers’ growing interest in ChatGPT,” explained Alejandro Botter, Check Point Technical Manager for South Latin America, in dialogue with this outlet.

Three recent cases detected by the cybersecurity firm show how cybercriminals and scammers take advantage of this tool.

The first is the creation of “infostealers”, a type of malware that steals information stored in browsers (passwords, personal data, cards, etc.).

“On December 29, 2022, a thread called ‘ChatGPT – Malware Benefits’ appeared on a popular underground hacking forum. The thread’s author revealed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups on common malware. These messages appeared to teach less technically skilled attackers how to use ChatGPT for malicious purposes, with real examples they could apply right away,” adds Botter.

A second type of malicious use detected involved the creation of a file-encryption tool — a program that converts readable data into encoded data. A cybercriminal known as USDoD posted it and acknowledged it had been created with ChatGPT. “This script [a file with instructions in a programming language] can be modified to encrypt a computer without any user interaction. For example, the code could be turned into ransomware,” says the expert.

Finally, ChatGPT is already being used to facilitate fraudulent activities: “The main role of these marketplaces in the illicit underground economy is to provide a platform for the automated trade of illegal or stolen goods — such as stolen accounts or payment cards, malware, or even drugs and ammunition — with all payments made in cryptocurrencies,” he explains.

There is also a no less important problem: the data we ourselves hand over to the application when we use it. “Another problem with these language models is that they ‘learn’ from what we write; any confidential or private information we upload to the platform will be used in some way,” warns Herrera of Pucará.

“That is why it is important to generate discussion about moderation and restriction in the AI sector, including ChatGPT,” he concludes — a comment reminiscent of last week’s warning by Elon Musk and other technologists calling for regulation and a six-month pause on developments beyond GPT-4.

How can it be an ally?

ChatGPT can optimize processes to detect viruses. Photo Reuters

Both Palo Alto and Check Point agree that, despite the development of malware with ChatGPT, AI should not be “demonized”; the silver lining is that it can, in fact, be used to fight cybercrime.

“We shouldn’t be afraid of artificial intelligence, but see it as an ally, because it allows automated defense. At Palo Alto Networks, we were ahead of our time in recognizing the importance of AI- and ML-powered security. For example, AI and ML in security help establish a baseline of normal operations and then alert a team to potential anomalies, creating an automation roadmap that saves time and resources,” says Opezzo of Palo Alto.
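The “baseline then alert” idea Opezzo describes can be sketched in a few lines. This is a minimal, illustrative example (not Palo Alto Networks code): it learns the mean and standard deviation of a normal metric — say, daily login counts — and flags observations that deviate beyond a chosen threshold.

```python
import statistics

def build_baseline(samples):
    """Summarize 'normal operations' as (mean, standard deviation)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical metric: logins per day during a quiet week.
normal_logins = [98, 102, 97, 105, 100, 99, 101]
baseline = build_baseline(normal_logins)
print(is_anomalous(100, baseline))  # False: within the normal range
print(is_anomalous(450, baseline))  # True: worth alerting a human analyst
```

Real products model many metrics at once with far richer statistics, but the principle is the same: the system learns what “normal” looks like and surfaces only the deviations, which is the time-saving automation the quote refers to.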

Indeed, it was one of the topics at this year’s annual meeting of the World Economic Forum, where government, business and cybersecurity leaders discussed the intersection of AI and security.

“Artificial intelligence is already driving scientific progress. It is helping to detect financial fraud and build climate resilience. It is a tool we can use to improve many areas of our lives — including security and cybersecurity,” says Check Point’s Botter.

“By incorporating AI into a unified, layered security architecture, cybersecurity solutions can deliver an intelligent system that not only detects but actively prevents advanced cyberattacks,” he adds.

Among the benefits AI can bring to the fight against scams, the expert lists the automation of repetitive tasks, automatic incident detection and “situational awareness” — that is, artificial intelligence can collect and process data that provides more context to warn of a situation that is dangerous for the user.

Ultimately, like all technology, the artificial intelligence behind ChatGPT can be an ally or a threat. Time will surely prove one of the two sides right.

Source: Clarin