Info-tech

Now, cyber criminals can put words into your mouth using AI, ML tools

K V Kurmanath Hyderabad | Updated on December 06, 2019 Published on December 05, 2019


McAfee’s Threat Predictions report sees the fake news headache getting worse in 2020

Imagine this.

Scenario 1. Someone’s infuriating, nonsensical words coming out of someone else’s mouth, leaving no iota of doubt among the audience about their authenticity.

Scenario 2. Cyber criminals releasing a made-up video where a CEO of a listed entity confesses that the company missed its earnings or that there’s a fatal flaw in a product that’s going to require a massive recall. This could make shares of that company go for a toss.

Here comes the threat of deepfakes. Armed with advanced Artificial Intelligence and Machine Learning tools, cyber criminals and cliques can churn out videos, making us believe that the people in the video are actually uttering those words.

Even as the world struggles to handle the flood of fake news and doctored videos, the threat of deepfakes is knocking on our doors.

What are GANs?

Generative Adversarial Networks (GANs), a recent machine-learning technique, can create fake but incredibly realistic images, text, and videos.
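The adversarial idea behind GANs, a generator learning to fool a discriminator that tries to tell real samples from fakes, can be sketched in a few lines. The toy model below is a deliberately simplified illustration (one-layer linear networks on 1-D data, all names and settings assumed for the sketch), not any production deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator learns to mimic them.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: maps noise z to a sample via a single affine layer.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros((1,))
# Discriminator: scores a sample's "realness" via a single affine layer.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros((1,))

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    real = real_batch(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                    # dLoss/dlogit for BCE loss
        d_w -= lr * (x.T @ grad) / batch
        d_b -= lr * grad.mean(axis=0)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = z @ g_w + g_b
    p = sigmoid(fake @ d_w + d_b)
    grad_logit = p - 1.0                    # generator wants D to say "real"
    grad_fake = grad_logit @ d_w.T          # backprop through the discriminator
    g_w -= lr * (z.T @ grad_fake) / batch
    g_b -= lr * grad_fake.mean(axis=0)

samples = rng.normal(size=(500, 1)) @ g_w + g_b
# Under these toy settings the generated mean typically drifts near the
# real data's mean of 4, though GAN training can oscillate.
print(f"generated mean: {samples.mean():.2f}")
```

Real deepfake systems replace these one-layer networks with deep convolutional models trained on video frames, but the adversarial loop is the same.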

With some effort, using certain tools, anti-fake news websites and the work of activists, we are at least able to tell the difference. The task becomes far more difficult and complex with fake content created using Artificial Intelligence and Machine Learning tools.

McAfee, in its cyber threat landscape report for 2020, says deepfake video or text can be weaponised to enhance information warfare.

“Freely available video of public comments can be used to train a machine-learning model that can develop a deepfake video depicting one person’s words coming out of another’s mouth,” Raj Samani, Chief Scientist and McAfee Fellow, Advanced Threat Research, has said.

“Attackers can now create automated, targeted content to increase the probability that an individual or groups fall for a campaign. In this way, AI and ML can be combined to create massive chaos,” he says.

“We predict the ability of an untrained class to create deepfakes will drive an increase in the quantity of misinformation,” he says.

“Computers can rapidly process numerous biometrics of a face, and mathematically build or classify human features, among many other applications,” the McAfee executive says.

Ransomware threats

The coming year will see cyber criminals exploit their extortion victims even further.

“We predict targeted penetration of corporate networks will continue to grow and ultimately give way to two-stage extortion attacks. In the first stage cyber criminals will deliver a crippling ransomware attack, extorting victims to get their files back,” he says.

“In the second stage criminals will target the recovering ransomware victims again with an extortion attack, but this time they will threaten to disclose the sensitive data stolen before the ransomware attack,” he adds.

APIs vulnerable

McAfee said that Application Programming Interfaces (APIs), which allow two different applications to interact with one another and complete service requests, will continue to be targets for attacks. “Despite the fallout of large-scale breaches and ongoing threats, APIs often still reside outside of the application security infrastructure and are ignored by security processes and teams,” it points out.
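A concrete, entirely hypothetical sketch of that weakest-link problem: compare an API handler deployed outside the application’s security controls with the same handler behind a minimal bearer-token check. The endpoint names and token scheme below are assumptions made for illustration:

```python
# Hypothetical API handlers modelled as plain functions taking a
# request dict; no real framework or service is implied.

def insecure_profile_api(request):
    # Deployed outside the security perimeter: returns user data to
    # ANY caller, with no token or scope check at all.
    return {"status": 200, "user": request["user_id"],
            "email": "user@example.com"}

def secure_profile_api(request, valid_tokens):
    # The same endpoint behind a minimal bearer-token check.
    token = request.get("headers", {}).get("Authorization", "")
    if token.removeprefix("Bearer ") not in valid_tokens:
        return {"status": 401, "error": "unauthorised"}
    return {"status": 200, "user": request["user_id"],
            "email": "user@example.com"}
```

An anonymous call to the first handler succeeds, while the second rejects it with a 401 unless a valid token is presented, which is exactly the gap McAfee describes when APIs sit outside the application security infrastructure.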

“The increasing need and hurried pace of organisations adopting APIs for their applications in 2020 will expose API security as the weakest link leading to cloud-native threats, putting user privacy and data at risk until security strategies mature,” it says.
