Ahead of 2019 elections, Facebook takes steps to curb fake news, hate speech

Venkatesh Ganesh | Updated on October 31, 2018 at 09:20 PM

It has updated guidelines and will verify authenticity

Facebook founder Mark Zuckerberg

Ahead of the 2019 elections, Facebook is taking a cue from its experience in other Asian countries and has introduced new features to curb inappropriate content, including hate speech and fake news.

The social networking giant has laid out ground rules specifying that attacks on or by public figures that propagate hate speech will find no place on the world’s largest social network, which has 2.23 billion users. Facebook also said it will ‘engage’ third-party fact-checkers to verify the authenticity of content, and has partnered with fact-checking platforms such as Boom Live to check the veracity of posts.

Content Policy

Facebook will also expand such partnerships and bring on board more moderators, according to Sheen Handoo and Varun Reddy, Public Policy Managers at Facebook. Both are part of the Content Policy team, which is responsible for the company’s global Community Standards that set the bar for the kind of content that can be shared on the platform. The Community Standards cover content on Facebook and Instagram but do not extend to WhatsApp, which is also widely used in India.

“So the algorithm will figure out, based on the ‘caption’ which is included with the forwarded message, whether the person is for or against hate speech,” said Handoo. The company clarified, however, that if public figures put out hate speech directly, such posts will be taken down.
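Facebook did not explain how that determination is made. As a purely illustrative sketch (not Facebook’s actual system), the idea of reading a sharer’s caption to infer intent could look like the simple keyword heuristic below; the cue lists, function names and thresholds are all hypothetical.

```python
# Illustrative sketch only -- not Facebook's real moderation pipeline.
# Idea: when a post already flagged as hate speech is re-shared, look at the
# caption the sharer added and guess whether they endorse or condemn it.

CONDEMNING_CUES = {"shameful", "disgusting", "condemn", "report this", "this is wrong"}
ENDORSING_CUES = {"agree", "well said", "exactly", "share widely", "so true"}

def caption_stance(caption: str) -> str:
    """Return 'condemns', 'endorses', or 'unclear' for a sharer's caption."""
    text = caption.lower()
    condemns = any(cue in text for cue in CONDEMNING_CUES)
    endorses = any(cue in text for cue in ENDORSING_CUES)
    if condemns and not endorses:
        return "condemns"       # e.g. sharing the post in order to call it out
    if endorses and not condemns:
        return "endorses"       # amplifying the hateful content
    return "unclear"            # ambiguous captions would go to human review

def moderation_action(original_is_hate_speech: bool, caption: str) -> str:
    """Decide what to do with a forwarded post, given the sharer's caption."""
    if not original_is_hate_speech:
        return "allow"
    stance = caption_stance(caption)
    if stance == "endorses":
        return "remove"
    if stance == "condemns":
        return "allow"          # counter-speech is typically permitted
    return "send_to_human_review"

if __name__ == "__main__":
    print(moderation_action(True, "This is disgusting, report this page"))  # allow
    print(moderation_action(True, "Well said, share widely"))               # remove
```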

In India, Facebook admits that when politicians put out hate speech in a nuanced manner at public rallies, and it is then reported by the media, it falls into a grey area that is difficult to judge unless it amounts to outright hate speech.

Apart from hate speech, Facebook has updated its guidelines in other areas, including nudity, dangerous organisations, bullying and harassment, self-harm, sexual violence and exploitation, criminal activity, and violence and graphic content.

Reddy explained that the India playbook will draw heavily on lessons Facebook learnt from political campaigns in Myanmar and Sri Lanka.

In Myanmar earlier this year, a UN investigator said that Facebook was used to incite violence and hatred against a Muslim minority group.

In India, WhatsApp has been used to spread fake news and incite violence, including lynchings, after which the messaging platform was forced to roll out features to prevent the instigation of mob violence.

‘Dangerous organisations’

Reddy and Handoo said that Facebook has created a “Dangerous Organisations” category, which includes terrorist organisations, organised crime groups, cartels and organisations using violence to further political hate.

They said that Facebook complies with the legal requirements of the Indian government. “If the government sends a valid request (which is based on someone in an authoritative position with a legal standing), it is then looked into,” they said.

Published on October 31, 2018 15:49