New research, published on the official website of Tallinn University, explores the spread of Covid-19 disinformation on the internet, dubbed by some the pandemic's accompanying "infodemic," and examines how societies across different countries and platforms have reacted to it.

The research was carried out to evaluate how national governments have responded to the task of providing a regulatory framework for online companies. It also sheds light on how these companies have transposed the obligation to protect human rights and combat hate speech online into their community standards.

Big Tech

The research revealed that in almost all the survey responses, Facebook and YouTube rank among the top five platforms, accompanied mostly by Instagram and Twitter and sometimes by Pinterest and LinkedIn.

The search engine Google and the messaging service WhatsApp are mentioned less often, but when they are, they rank first or second on the list of platforms.

In some countries, meanwhile, popular national and regional news sites were as popular as social media websites.

This distinction between (social media) platforms and messaging services appears to play an even more important role in the spread of Covid-related (dis)information, the study added.

The study further said: "Facebook and, to a slightly lesser extent, Twitter, Instagram and other popular platforms are nearly always mentioned as a spreading medium; some replies explicitly point towards the increasing importance of messaging apps in circulating Covid-related disinformation." One report explicitly mentions the increasing practice of "chain-messaging via Viber and WhatsApp platforms, with disinformation about various aspects of the pandemic."

Other findings

Drawing on studies from Israel and Germany, the study noted that "WhatsApp's groups are more dangerous in this time than public platforms such as Twitter as the spreader identity provides credibility to the message delivered."

The study also mentioned the reported counter-measures against such disinformation. These include labeling potentially harmful, misleading information on Twitter; Covid-19-related content moderation rules on YouTube; a WHO chatbot on WhatsApp; and increased content moderation in cooperation with third-party fact-checkers on Facebook.

Those measures, however, are not country-specific and are apparently being rolled out step by step without significant national differences, the study added.

The researchers noted that only South Africa appears to be an exception here, as "misinformation is removed in response to public outrage or the possibility of criminal prosecution rather than any measures imposed by the social media platforms themselves."
