Meet FraudGPT: The Dark Side Twin of ChatGPT

ChatGPT has become widely popular, influencing how people work and what they find online. Many people, even those who have not tried it, are intrigued by the potential of AI chatbots. The prevalence of generative AI models has also changed the threat landscape, and cybercriminals have been investigating ways to profit from the trend. Evidence of one such effort, FraudGPT, has surfaced in recent threads on dark web forums.

Researchers at Netenrich have uncovered a new artificial intelligence tool called “FraudGPT.” This AI bot was built specifically for malicious activities, such as sending spear-phishing emails, developing cracking tools, and carding. The product is sold on numerous dark web marketplaces and on the Telegram app.

What is FraudGPT?

FraudGPT is similar to ChatGPT but generates content for use in cyberattacks; it can be purchased on the dark web and through Telegram. The Netenrich threat research team first noticed it being advertised in July 2023. One of FraudGPT’s selling points is that it lacks the safeguards and restrictions that make ChatGPT unresponsive to questionable queries.

According to the listing, the tool receives updates every week or two and uses several different types of artificial intelligence. FraudGPT is sold on a subscription basis: a monthly subscription costs $200, while an annual membership costs $1,700.

How does it work?

The Netenrich team purchased and tested FraudGPT. Its layout is quite similar to ChatGPT’s, with a history of the user’s requests in the left sidebar and the chat window taking up most of the screen. To get a response, users need only type their question into the box provided and hit “Enter.”

One test case for the tool was a phishing email related to a bank. User input was minimal; simply including the bank’s name in the prompt was all it took for FraudGPT to finish the job. It even indicated where a malicious link could be placed in the text. FraudGPT can also create scam landing pages that solicit visitors’ personal information.

FraudGPT was also prompted to name the most frequently visited or exploited online resources, information that could help hackers plan future attacks. An online advertisement for the tool boasted that it could generate malicious code to build undetectable malware, find vulnerabilities, and identify targets.

The Netenrich group also discovered that the seller of FraudGPT had previously advertised hacking-for-hire services. They also connected the same person to a similar tool named WormGPT.

The FraudGPT investigation underscores the importance of vigilance. It is not yet known whether hackers have already used these tools to develop novel threats. Nevertheless, FraudGPT and similar malicious programs could save hackers time: phishing emails and landing pages can be written or built in seconds.

Consumers must therefore remain wary of any requests for their personal information and continue to follow cybersecurity best practices. Cybersecurity professionals would be wise to keep their threat-detection tools up to date, especially since malicious actors may use programs like FraudGPT to directly target and penetrate critical computer networks.

The analysis of FraudGPT is a pointed reminder that hackers will keep adapting their methods. Open-source software, too, has security flaws. Anyone who uses the internet, or whose job it is to secure online infrastructure, must keep up with emerging technologies and the threats they pose. The key is to remember the risks involved while using programs like ChatGPT.

