Technology giants Google, WhatsApp and Facebook released their monthly compliance reports for July under the new IT Rules on Tuesday.

Google actioned 5,76,892 content pieces through automated detection last month, compared with 33.3 million pieces taken down by Facebook over the same period. WhatsApp said it had banned over 3 million accounts between June 16 and July 31, up from 2 million in May-June.

This is Google’s third monthly report, and the number of actioned content pieces rose significantly from the 83,613 removed in June and 71,132 in May.

Between July 1 and 31, Google received a total of 36,934 complaints from individual users and actioned around 95,680 content pieces across nine categories.

“A single complaint may specify multiple items that potentially relate to the same or different pieces of content. While the previous section of this report provides information on the volume of complaints received, this section summarises the volume of removal actions taken on items contained in complaints received,” Google explained in its report referring to the higher actioned content number.

Content categories

Across its nine content categories, Google actioned 94,862 content pieces (99.1 per cent) for ‘copyright’, 807 (0.8 per cent) for ‘trademark’, 4 for ‘court order’, 3 for ‘counterfeit’, and one each for ‘graphic sexual content’, ‘impersonation’, other ‘legal issues’ and ‘defamation’.

In contrast, of Facebook’s 33.3 million actioned content pieces between June 16 and July 31, 2.6 million came from the ‘adult nudity and sexual activity’ category and 3.5 million from ‘violent and graphic content’.

Facebook’s automated proactive action rate across 10 categories ranged from 42.3 per cent to 99.9 per cent. The highest rate, 99.9 per cent, applied to around 25.6 million cases of ‘spam’, while the lowest applied to 1,23,400 ‘bullying and harassment’ cases. Other categories with high case counts include ‘suicide and self-injury’, with 9,45,600 cases.

The numbers are slightly higher across segments than the 30 million pieces of content Facebook removed last month.

“Over the years, we have consistently invested in technology, people and processes to further our agenda of keeping our users safe and secure online and enable them to express themselves freely on our platform. We use a combination of artificial intelligence, reports from our community and review by our teams to identify and review content against our policies. In accordance with the IT Rules, we’ve published our second monthly compliance report for the period for 46 days – 16 June to 31 July. This report will contain details of the content that we have removed proactively using our automated tools and details of user complaints received and action taken,” a Facebook spokesperson said.

Facebook’s other platforms

Facebook’s subsidiaries Instagram and WhatsApp also came out with their reports on the same day. Instagram reported 2.86 million pieces of actioned content, up from the 2.03 million reported in the previous report.

For Instagram, 1.1 million pieces of content actioned related to ‘violent and graphic content’, 8,11,000 to ‘suicide’, 6,76,100 to ‘sexual content and nudity’, 56,200 to ‘hate speech’, and 1,95,100 to ‘bullying and harassment’, among the nine categories.

“Given that such violations are also highly adversarial, country-level data may be less reliable. For example, bad actors may often try to avoid detection by our systems by masking the country they are coming from. While our enforcement systems are global and will try to account for such behaviour, this makes it very difficult to attribute and report the accounts or content by producer country (where the person who posted content was located),” Facebook said in the report.
