Cybercriminals have also recognized the potential of OpenAI’s ChatGPT chatbot and have begun using artificial intelligence to quickly develop hacking tools. Cyber security researchers issued this warning earlier this year. Speaking to Forbes, an expert who monitors criminal forums reported that scammers are experimenting with ChatGPT to build other chatbots that pose as young women to ensnare targets.
Many early ChatGPT users had already expressed fears that the app might be capable of coding malicious software that logs users’ keystrokes or creates ransomware.
According to a report by Israeli security company Check Point, its researchers reviewed a forum post in which a hacker who had previously spread Android malware presented code written by ChatGPT that stole files of interest, compressed them, and sent them over the Internet. He also showed another tool that installed a backdoor on a computer and could upload additional malware to the infected PC. Other forum users claimed to have created their first-ever script with ChatGPT and used it to encrypt files. Such code can serve completely harmless purposes; with a small modification, however, the same script could encrypt a victim’s computer without any interaction from the user.
Alex Holden, founder of cyber intelligence company Hold Security, said he has also observed dating scammers using ChatGPT to create convincing personas. “They plan to create chatbots, mostly pretending to be girls, to get ahead in chats with their targets,” he said, describing it as an attempt to automate chats.
For now, the tools programmed with ChatGPT look “pretty basic,” but Check Point says it’s only a matter of time before “more sophisticated” hackers find a way to use the AI to their advantage. Rik Ferguson, vice president of security intelligence at U.S. cybersecurity company Forescout, said ChatGPT does not yet appear capable of programming anything as complex as the malware seen in significant hacking attacks in recent years. However, OpenAI’s app may lower the barrier to entry for newcomers to the illicit market by creating simpler but similarly effective malware, Ferguson added. He also believes ChatGPT could be used to create websites and bots that trick users into sharing their data, and that it could “industrialize the creation and personalization of malicious websites, targeted phishing campaigns and social engineering-based scams.”
It’s too early to tell whether ChatGPT will become a new favorite tool of participants on the dark web. Check Point asked ChatGPT itself about the potential for abuse. The chatbot answered that abuse cannot be ruled out, citing the creation of phishing emails and social media posts as examples. At the same time, it pointed out that OpenAI is not responsible for misuse of its technology and takes steps to prevent it, noting, for example, that users must agree not to use the services for illegal or harmful activities. Will this be enough to stop cybercriminals from abusing ChatGPT?