In its first monthly compliance report under the new IT rules, WhatsApp on Thursday said it banned two million accounts linked to Indian mobile numbers between May 15 and June 15. More than 95 per cent of the banned accounts faced action for unauthorised use of automated or bulk messaging.

Globally, WhatsApp bans about eight million accounts every month on average. The messaging company identifies these accounts without reading message content, which is end-to-end encrypted; instead, it relies on AI tools and behavioural signals to prevent harmful activity on the platform.

“We employ a team of engineers, data scientists, analysts, researchers, and experts in law enforcement, online safety, and technology developments to oversee these efforts. We enable users to block contacts and to report problematic content and contacts to us from inside the app. We pay close attention to user feedback and engage with specialists in stemming misinformation, promoting cybersecurity, and preserving election integrity,” WhatsApp said. “We are particularly focussed on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred,” it added.

Three-stage security

Abuse detection operates at three stages of an account's lifecycle: at registration, during messaging, and in response to negative feedback, which the platform receives in the form of user reports. WhatsApp said it received 375 grievances from users during the May-June period.

Grievances on FB

Separately, Facebook on Thursday said it received 646 reports through the Indian grievance channel. These complaints related to hacked accounts, fake profiles, nudity or sexual posts, and bullying. Facebook provided users with tools to resolve their issues in 363 cases and took down 47 pieces of content.
‘A better overview’

Experts said these numbers underline the importance of quantitative insights into how much content is removed and open the way for questions on how the moderation systems operate. “With the monthly compliance report, we will have a better overview of the type and number of content pieces being taken down. Any transparency with respect to content takedown will be good. There could be other issues with the IT rules, but the transparency on content is a good decision. Further, how it will play out, we will get to know with time,” Prasanth Sugathan, legal director, told BusinessLine.

Other apps

Facebook and Instagram, along with Google, Twitter, and Koo, released their compliance reports earlier this month. Facebook took action against 30.5 million posts and Instagram actioned 2.03 million posts. Google removed 59,350 items between April and May without specifying categories, while Twitter removed 133 URLs. Of the 5,502 Koos or posts reported by the community, 22.7 per cent (1,253) were removed, while other action was taken against the remaining 4,249.