With reports suggesting that cybercriminals are using the Internet sensation ChatGPT to create malicious code, OpenAI, the company behind the AI-based query engine that churns out contextual, human-like answers, has deployed barriers to thwart such attempts.
Cybercriminals are placing advertisements on the dark web, offering services that allow buyers to vault over these restrictions.
To circumvent the restrictions, they have started using Telegram bots.
Check Point Research has found advertisements for such Telegram bots on underground forums.
“The bots utilise OpenAI’s API to enable a threat actor to create malicious emails or code. Bot makers are currently granting up to 20 free queries, but then charge $5.50 for every 100 queries,” Check Point Research (CPR) has said in a blog post.
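To put the quoted pricing in perspective, here is a small sketch of the cost model. It assumes the first 20 queries are free and that paid queries are billed pro rata at the quoted rate; the blog post does not spell out whether billing is pro rata or in blocks of 100, so this is an illustrative assumption.

```python
def bot_cost_usd(queries: int, free_quota: int = 20, price_per_100: float = 5.50) -> float:
    """Estimated cost of using one of the advertised bots, per CPR's quoted figures.

    Assumes the first `free_quota` queries are free and paid queries are
    billed pro rata at `price_per_100` dollars per 100 queries.
    """
    paid_queries = max(0, queries - free_quota)
    return paid_queries / 100 * price_per_100

# Example: 120 queries = 20 free + 100 paid, i.e. one full paid block.
print(bot_cost_usd(120))  # → 5.5
```

Under these assumptions the entry cost is trivially low, which is consistent with CPR's point that the bots lower the bar for abuse.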
CPR had raised a red flag a few weeks ago, explaining how cybercriminals were using ChatGPT to write malicious code.
CPR has shared examples of dark-web advertisements for these Telegram bots, along with a phishing email and a sample of malware code created through one of them.
“Cybercriminals are creating basic scripts that use OpenAI’s API to bypass anti-abuse restrictions,” it said, giving an example of a script that queries the API directly to sidestep the restrictions and develop malware.
An Application Programming Interface (API) allows developers to integrate a particular tool or service, letting other applications access its functionality.
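As an illustration of what integrating such an API involves, here is a minimal, hypothetical sketch of how an external application might assemble a request to a text-generation API. The endpoint, model name, and field names below are assumptions for illustration only, and no request is actually sent.

```python
import json

# Hypothetical endpoint; real services publish their own URLs and schemas.
API_URL = "https://api.example.com/v1/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, str]:
    """Return the HTTP headers and JSON body for a text-completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # API keys identify the caller
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "example-model",  # hypothetical model identifier
        "prompt": prompt,
        "max_tokens": 100,
    })
    return headers, body

headers, body = build_request("Summarise this article.", "sk-demo")
print(json.loads(body)["prompt"])  # → Summarise this article.
```

Because any application holding a valid key can send such requests directly, abuse prevention depends on checks enforced at the API layer itself, which is the gap the article describes.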
“As part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. However, we’re seeing cybercriminals work their way around ChatGPT’s restrictions,” Sergey Shykevich, Threat Group Manager at Check Point Software, said.
“There’s an active chatter in the underground forums disclosing how to use OpenAI API to bypass ChatGPT's barriers and limitations. This is mostly done by creating Telegram bots that use the API, and these bots are advertised in hacking forums to increase their exposure,” he said.
The current version of OpenAI's API is used by external applications and has very few anti-abuse measures in place.
“As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on its user interface,” he said.
“We’re seeing continuous efforts by cybercriminals to find ways around ChatGPT restrictions,” he observed.