Social Media

In run-up to US elections, Facebook cracks down on deepfakes

Hemani Sheth Mumbai | Updated on January 07, 2020 Published on January 07, 2020

Deepfake videos or text can be weaponised to enhance information warfare

Facebook has announced new measures to eliminate deepfake videos from its platform, amid concerns related to the upcoming US elections.

Facebook recently put out a detailed release on its website highlighting its policies to counter deepfake videos, i.e., videos doctored using AI to make them seem real.

Facebook’s strategy is to collaborate with over 50 experts from different backgrounds, including technical, policy, media, legal, civic and academic, to develop methods for detecting fake media and policies to govern it.

“Our approach has several components, from investigating AI-generated content and deceptive behaviours like fake accounts, to partnering with academia, government and industry, to exposing people behind these efforts,” said Monika Bickert, Vice-President, Global Policy Management, in an announcement published on Facebook’s news section.

Crackdown on deepfakes

According to the announcement, Facebook and Facebook-owned Instagram will remove content if it is detected to be a deepfake. The criterion for removal is that the video has been edited, synthesised, or produced using artificial intelligence in a way that manipulates it to seem real.

The ban is driven by concerns over the upcoming US elections, according to reports. With recent political tensions across the globe, including the anti-CAA unrest and violence against students in India as well as the upcoming US elections, deepfakes are becoming an even bigger concern.

Facebook’s Deepfake Challenge

Facebook had earlier attempted to tackle deepfake issues by collaborating with start-ups and experts through its ‘Deepfake Challenge’ in September. The challenge invited developers across the globe to submit deepfake videos, which would help Facebook build its own dataset for improving the detection of similar videos. Along with its partnering universities, the company had pledged $10 million and had released over 5,000 videos to seed the dataset.

Facebook Chief Technology Officer Mike Schroepfer said in a media report, “The goal of the competition is to spur the construction of an AI system that can look at a video and determine whether it has been altered. Researchers and a couple of start-ups are working on this problem.”

The report also mentioned the various tactics to recognise deepfakes, including out-of-place visuals and shadows.

Facebook has received a lot of flak in the past year for failing to take timely action after multiple doctored videos went viral. These included a doctored video of House Speaker Nancy Pelosi. A deepfake video of Mark Zuckerberg, created by Israel-based AI start-up Canny AI, also went viral on Instagram back in June, where Zuckerberg was pictured saying, “Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures.”

Facebook has also partnered with news agency Reuters to train newsrooms in identifying fake media through a free online training course.

Companies such as Amazon and Microsoft have also dedicated resources to identify and crack down on deepfakes.

