uTalk

Official forum for Utopia Community


#1 2023-07-15 23:33:35

thrive
Member
Registered: 2023-01-04
Posts: 2,068

WormGPT: New AI Tool Enables Cybercriminals to Launch Advanced Cyberattacks

Given how popular generative artificial intelligence (AI) is right now, it may not come as a surprise that the technology has been repurposed by malicious actors for their own gain, opening up new opportunities for accelerated cybercrime.

A new generative AI cybercrime tool called WormGPT has been advertised on darknet forums as a way for adversaries to carry out sophisticated phishing and business email compromise (BEC) attacks, according to SlashNext findings.

Daniel Kelley, a security researcher, described the tool as a "blackhat alternative to GPT models, designed specifically for malicious activities." Cybercriminals can use such technology to automate the creation of fake emails that are personalized to the recipient and highly convincing, which increases the attack's likelihood of success.

The software's creator referred to it as the "biggest enemy of the well-known ChatGPT" that "lets you do all kinds of illegal stuff."

Tools like WormGPT could be a potent weapon in the hands of a bad actor, especially as companies such as Google (Bard) and OpenAI (ChatGPT) work harder to prevent their large language models (LLMs) from being misused to create convincing phishing emails and produce malicious code.

According to a report released this week by Check Point, "Bard's anti-abuse restrictors in the domain of cybersecurity are significantly lower compared to those of ChatGPT. Because of this, it is much simpler to produce malicious content when using Bard's capabilities."


Advanced cyberattacks

Earlier in February, the Israeli cybersecurity company revealed how cybercriminals were using ChatGPT's API to get around the platform's limitations, trade stolen premium accounts, and sell software that used massive lists of email addresses and passwords to break into ChatGPT accounts.

WormGPT's lack of ethical constraints highlights the danger posed by generative AI, allowing even inexperienced cybercriminals to launch attacks quickly and on a large scale without the necessary technical resources.

Worse yet, threat actors are promoting "jailbreaks" for ChatGPT by developing unique prompts and inputs that are intended to trick the tool into producing output that may involve disclosing private data, creating offensive content, and running malicious code.

"Generative AI can produce emails with perfect grammar, making them seem legitimate and decreasing the likelihood of being flagged as suspicious," Kelley said.

"The execution of sophisticated BEC attacks is made more accessible through the use of generative AI. This technology enables even novice attackers to launch attacks, making it a useful tool for a wider range of cybercriminals."

The disclosure comes after Mithril Security researchers "surgically" modified the open-source GPT-J-6B model to spread misinformation and then uploaded it to a public repository, Hugging Face, where it could be integrated into other applications, an attack known as LLM supply chain poisoning.

The PoisonGPT technique hinges on uploading the lobotomized model under a name that impersonates a well-known organization: in this case, a typosquatted version of EleutherAI, the company that created GPT-J.

