There is evidence from a number of dark web forums that hackers are developing dangerous tools using OpenAI’s ChatGPT.
In Brief
ChatGPT can write and review computer code.
The research company says the malicious tools built with it so far are simple.
More skilled hackers could soon discover advanced techniques for leveraging ChatGPT to produce malware.
Author: Abhik Sengupta
Users are still impressed with ChatGPT, the AI-powered chatbot created by research group OpenAI. The software, currently free to use, can converse, perform mathematical operations, compose lengthy articles and advertising campaigns for businesses, and even evaluate and create computer code. Users favour the chatbot for its adaptability and accuracy, even though its output is not always flawless. However, some cybercriminals have started exploiting ChatGPT to produce malware and write dangerous code.
Security company Check Point Research (CPR) has found evidence in a number of underground communities that hackers are utilising OpenAI's technology to create harmful applications. The present generation of malicious tools is simple, according to the researchers, but "it is only a matter of time until more skilled threat actors increase the way they exploit AI-based tools for harm," they write in a blog post.
The research company also discovered a discussion on a well-known underground hacker forum titled "ChatGPT – Benefits of Malware," where the poster had shared his experience with ChatGPT. Using the platform, the poster developed a Python-based information stealer that "looks for common file types, transfers them to a random subdirectory inside the Temp folder, ZIPs them, and sends them to an FTP server with a hardcoded address."
Another hacker used ChatGPT to build a straightforward Java-based downloader. The post notes, "This (Java) script may of course be tweaked to download and run any software, including known malware families."
According to the research group, hackers have similarly used ChatGPT to create a dark web marketplace and a malicious encryption tool.
The research company cautions that it is too soon to predict whether ChatGPT will replace other popular dark web tools. The platform, however, is gradually gaining popularity and could at least help both amateur and professional hackers produce advertisements and text for dubious websites.
For instance, there have been several incidents in India where criminals have used WhatsApp to steal money from victims. These malicious campaigns have frequently relied on grammatically incorrect English, which ChatGPT can now quickly correct. In a similar way, a hacker can use OpenAI's Dall-E platform to generate images without infringing copyright. Since these tools make producing creatives nearly cost-free, hackers may create more ads that appear legitimate but contain phishing URLs designed to steal users' personal information and even money.
ChatGPT currently receives regular updates, and its developer may yet address the issue of people using the platform to create dangerous programmes. OpenAI is already developing a covert watermark to identify AI-generated text, which could also prove useful for detecting plagiarism.