Facebook has disclosed that it removed nearly 30 million pieces of content across 10 violation categories between May 15 and June 15. The disclosure is part of Facebook's compliance report under the new IT rules.

The removed content includes violations related to hate speech, bullying and harassment, adult nudity and sexual activity, terrorist propaganda, and spam.

“Over the years, we have consistently invested in technology, people and processes to further our agenda of keeping our users safe and secure online, and enabling them to express themselves freely on our platform. We use a combination of Artificial Intelligence, reports from our community and review by our teams to identify and review content against our policies. We’ll continue to add more information and build on these efforts towards transparency as we evolve this report,” said a Facebook spokesperson.

This is the first edition of the monthly India report under the Intermediary Guidelines, 2021. The report provides metrics on how Facebook enforced its policies through proactive monitoring and detection across Facebook and Instagram between May 15 and June 15. More than 90 per cent of this content was removed by Facebook's AI systems before any user reported it. "The rate at which we can proactively detect potentially violating content is high for some violations, meaning we find and flag most content before users do. This is especially true where we have been able to build machine learning technology that automatically identifies content that might violate our standards," Facebook said in its report.

The social media company added that, given the global nature of its platforms, where content posted in one country may be viewed almost anywhere in the world, attributing removed content to a particular country in a technically feasible and repeatable manner becomes almost meaningless. "So these estimates should be understood as directional best estimates of the metrics," it said.

Among the 10 categories on Facebook, the company removed 25 million pieces of spam, 53,000 pieces of hate speech, 2.5 million pieces of violent and graphic content, and 1.8 million pieces of adult nudity and sexual activity content. On Instagram, the nine categories of violations included 6.9 lakh pieces of content related to suicide, 53,000 pieces of hate speech, and 5,800 pieces of terrorist propaganda.

Google recently disclosed that it had removed 59,350 pieces of content.

Gurshabad Grover, Senior Researcher at the Centre for Internet and Society, said, "We have been pushing for this type of accountability and transparency, so it is a step in a positive direction for these companies to be releasing this data. These numbers stress the importance of quantitative insight into how much content is removed, and they pave the way to ask questions about how these systems moderate content. This would be of huge value for researchers and the public in understanding how these systems work."

Srinivas Kodali, an independent researcher on internet movements in India, said public understanding of the scale at which social media platforms remove content could trigger myriad reactions, including users opting out of the platforms or taking legal recourse against unfair removal of content.