Hackers are among the earliest adopters of Generative AI. Quick to identify the strengths of Large Language Models, they are already using them to generate malicious code.

Reports suggest that cybercriminals, even those with limited technical knowledge, can generate code for use in cyberattacks, a prospect that has caused considerable alarm.

However, there is some good news too. While Generative AI solutions can be used by bad actors to create malware and sharpen their weapons, tools like ChatGPT can also strengthen your cybersecurity arsenal.

“ChatGPT’s natural language processing (NLP) capabilities allow it to analyse and understand vast amounts of data, including security logs, network traffic, and user behaviour,” said Neelesh Kripalani, Chief Technology Officer of Clover Infotech.

“By using machine learning algorithms, ChatGPT can detect patterns and anomalies that might indicate a cybersecurity threat, helping security teams to prevent attacks before they occur,” he pointed out.

Faster response

Generative AI solutions can also be used to improve an organisation’s response time in tackling a cyberattack. Responding quickly to intrusions is crucial to thwarting attacks and minimising losses.

“ChatGPT’s ability to process and analyse large amounts of data quickly and accurately can help organisations to respond faster and more effectively to threats,” Kripalani said.

For example, ChatGPT can help identify the root cause of a security breach, guide on how to contain the attack, and suggest ways to prevent similar incidents in the future.

Cybersecurity firm Sophos, too, feels that AI can be an ally for defenders rather than an enemy.

“The security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings,” said Sean Gallagher, Principal Threat Researcher, Sophos.

Sophos said it has been working on three prototype projects that demonstrate the potential of GPT-3 as an assistant to cybersecurity defenders.

“All three use a technique called ‘few-shot learning’ to train the AI model with just a few data samples, reducing the need to collect a large volume of pre-classified data,” Gallagher said.
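
The few-shot approach Gallagher describes can be illustrated with a minimal sketch: instead of training on a large labelled corpus, a handful of labelled samples are packed directly into the model's prompt. The function, labels, and sample command lines below are all hypothetical, for illustration only.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot classification prompt from a handful of
    labelled samples plus the new input to be classified."""
    lines = ["Classify each command line as 'malicious' or 'benign'.", ""]
    for text, label in examples:
        lines.append(f"Command: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to complete the final, unlabelled entry.
    lines.append(f"Command: {query}")
    lines.append("Label:")
    return "\n".join(lines)

# Just two labelled samples stand in for a full training set.
samples = [
    ("powershell -enc SQBFAFgA", "malicious"),
    ("notepad.exe report.txt", "benign"),
]
prompt = build_few_shot_prompt(samples, "cmd /c whoami")
```

The resulting prompt string would then be sent to the language model, whose completion after the trailing “Label:” serves as the classification.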


The first application Sophos tested with the few-shot learning method was a natural language query interface for sifting through malicious activity in security software telemetry. It tested the model against its endpoint detection and response product.

“With this interface, defenders can filter through the telemetry with basic English commands, removing the need for defenders to understand SQL or a database’s underlying structure,” Sophos said.

The second application that it developed using ChatGPT is a new spam filter. The firm found that the filter powered by GPT-3 (Generative Pretrained Transformer 3) was significantly more accurate when compared to other machine-learning models for spam filtering.
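
Comparing filters “by accuracy”, as Sophos did, amounts to measuring the share of messages each model labels correctly on the same test set. A minimal sketch, with entirely made-up predictions and labels (1 = spam, 0 = ham) rather than Sophos’s actual results:

```python
def accuracy(predictions, labels):
    """Fraction of items where the prediction matches the true label."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical test set of 8 messages and two models' outputs.
labels         = [1, 1, 0, 0, 1, 0, 0, 1]
model_a_preds  = [1, 1, 0, 0, 1, 0, 0, 1]  # classifies all 8 correctly
model_b_preds  = [1, 0, 0, 1, 1, 0, 0, 0]  # misses 3 of 8
```

On these toy numbers, `accuracy(model_a_preds, labels)` is 1.0 against 0.625 for model B; a real evaluation would of course use thousands of messages.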

Sophos researchers were able to create a programme to simplify the process for reverse-engineering the command lines of LOLBins (Living Off The Land Binaries), which can be exploited or misused by attackers for malicious purposes.

“Such reverse-engineering is notoriously difficult, but also critical for understanding LOLBins’ behaviour—and putting a stop to those types of attacks in the future,” it said.

Kaspersky conducted an experiment to check how accurately ChatGPT identifies phishing.

“While the detection rate is very high, the false positive rate is unacceptable. Imagine if every fifth website you visit was blocked. Sure, no machine learning technology on its own can have a zero false positive rate, but this number is too high,” it said in its analysis of the experiment.
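
Kaspersky’s two metrics can be computed from a standard confusion matrix: detection rate is the share of phishing sites caught, while false positive rate is the share of legitimate sites wrongly blocked. The counts below are illustrative only, chosen so that the false positive rate matches the “every fifth website” figure in the quote:

```python
def rates(tp, fn, fp, tn):
    """Detection rate (recall) and false positive rate from
    true/false positive and negative counts."""
    detection_rate = tp / (tp + fn)       # caught phishing / all phishing
    false_positive_rate = fp / (fp + tn)  # blocked legit / all legit
    return detection_rate, false_positive_rate

# Hypothetical counts: 95 of 100 phishing sites caught,
# 4 of 20 legitimate sites wrongly blocked.
det, fpr = rates(tp=95, fn=5, fp=4, tn=16)
```

Here `det` is 0.95 and `fpr` is 0.2, i.e. one in five legitimate sites blocked, which is exactly the kind of rate Kaspersky deemed unacceptable.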

Automation of checks

Generative AI can also be used to automate security operations, particularly routine tasks such as patch management and vulnerability scanning, allowing security teams to focus on more complex issues, Clover Infotech said.
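
The kind of routine check Clover Infotech refers to can be as simple as comparing installed software against a list of known-vulnerable versions. A minimal sketch, with a hypothetical advisory list and inventory (not any real scanner’s data):

```python
# Hypothetical advisory list: package name -> known-vulnerable version.
VULNERABLE = {"openssl": "1.0.2", "log4j": "2.14.1"}

def scan(installed):
    """Return names of installed packages whose version matches
    a known-vulnerable entry."""
    findings = []
    for name, version in installed.items():
        if VULNERABLE.get(name) == version:
            findings.append(name)
    return findings

# Toy inventory: one vulnerable package, one up to date.
findings = scan({"openssl": "1.0.2", "curl": "8.0.1"})
```

Automating such checks frees analysts for the more complex investigative work the article describes.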

Besides, it can significantly assist in improving threat intelligence, thanks to its ability to quickly analyse data.

“It can also help in proactive threat hunting. It can alert the teams about potential threats before they become major issues by analysing data and identifying patterns. This can enable security teams to hunt for threats and act before they cause significant damage,” it said.