How does fake news work in elections?

Anuj Kapoor/Akash Chaturvedi | Updated on October 22, 2020 Published on October 22, 2020

Research tools to measure fake news impact must be sharpened, while trying to contain this menace

With the Bihar legislative elections around the corner, talk of democratic elections being vulnerable to social media manipulation has gained traction. The Election Commission has repeatedly called on social media giants like Facebook and Twitter to curb election manipulation through their platforms.

Drawing a parallel with the 2016 US Presidential election, a majority of experts and citizens felt that Russia-sponsored content on social media had little influence on the outcome because Russian-linked spending and exposure to fake news were small in scale.

On the other hand, a bipartisan Senate committee found that before and after the 2016 election, the Russian government used social media advertising to spread misinformation and conspiracy theories.

On similar lines, there has been talk of Facebook interfering with India’s electoral democracy. Regardless of the truth behind these claims, as a precautionary measure, a regulatory framework has to be in place to make the Bihar elections as manipulation-proof as possible.

The question is: how do we make this happen? Before suggesting any solutions or preventive measures, we would like to briefly discuss what misinformation and fake news are and how they spread. Fake news, broadly defined as false news or misinformation disguised as credible news, is very persuasive and has serious consequences for democracy.

MIT research

We build on the pioneering research done by MIT researchers on the spread of misinformation, and follow a four-pronged strategy to tackle the spread of misinformation through social media (particularly through social media ads) and thereby contain its impact on the Bihar legislative elections.

As suggested by the researchers, the very first step is to quantify exposure, i.e., list the ad impressions (the total number of times digital advertisements are displayed on someone’s screen) of paid and organic manipulative content (e.g., flagged and deceptive content meant to misguide voters).

To do so, advertisers need to evaluate: (1) the reach of the manipulation campaign (the total number of people who have seen the ad or content), and (2) the targeting and personalisation strategies behind these advertising campaigns.
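The distinction between impressions and reach is easy to sketch in code. The snippet below is a minimal illustration, assuming a hypothetical ad-exposure log of (user, ad, timestamp) records; the names and data are invented for illustration, not an actual platform API.

```python
from collections import defaultdict

# Hypothetical ad-exposure log: one record per time an ad was displayed.
exposure_log = [
    ("u1", "ad_42", "2020-10-01T09:00"),
    ("u1", "ad_42", "2020-10-01T18:30"),
    ("u2", "ad_42", "2020-10-02T11:15"),
    ("u3", "ad_77", "2020-10-02T12:00"),
]

def impressions_and_reach(log):
    """Impressions = total displays; reach = distinct users who saw the ad."""
    impressions = defaultdict(int)  # ad_id -> total displays
    viewers = defaultdict(set)      # ad_id -> distinct user ids
    for user_id, ad_id, _ts in log:
        impressions[ad_id] += 1
        viewers[ad_id].add(user_id)
    return {ad: (impressions[ad], len(viewers[ad])) for ad in impressions}

stats = impressions_and_reach(exposure_log)
# ad_42 was shown 3 times (impressions) to 2 distinct users (reach).
```

The same aggregation, run over flagged content only, would give the exposure counts the first step calls for.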

Put simply, it would help to know the deeper details of the advertising campaigns, such as the text, images, and videos that go viral, the platforms on which they do so, and the timing of the campaigns. Research suggests that three categories of content predominate: (a) old images taken out of context and posted again, (b) photoshopped images, and (c) false quotes and statistics. Together, these make up more than half of the misinformation on public political groups on WhatsApp in India.

Further, advertisers need to understand social multiplier effects (how individuals influence each other), i.e., how many times, when and where the content is re-shared. One key point to note here is that content-based hyper-localisation (offering content and ads in Indian languages) is booming, which makes the tracing and containment of misinformation all the more difficult.

The suggested second step is to supplement ad exposure data with data on voting behaviour. Data about voter turnout (e.g., registered voters’ names, addresses, party affiliations, and when they voted) should be matched to the rich location data possessed by social media platforms like Facebook and Twitter, and by third-party brokers.
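The matching step above amounts to a join between two datasets on a common voter identifier. The sketch below is purely illustrative: the voter IDs, field names, and booth labels are all hypothetical, and real record linkage would involve fuzzier matching on names and addresses.

```python
# Hypothetical platform exposure data, keyed by a voter identifier.
exposure = {
    "V001": {"flagged_ads_seen": 5},
    "V002": {"flagged_ads_seen": 0},
}

# Hypothetical voter-roll turnout data, keyed by the same identifier.
turnout = {
    "V001": {"voted": True, "booth": "Patna-12"},
    "V002": {"voted": False, "booth": "Gaya-03"},
    "V003": {"voted": True, "booth": "Arrah-07"},
}

def match_exposure_to_turnout(exposure, turnout):
    """Inner join on voter id: keep only voters present in both datasets."""
    return {
        vid: {**exposure[vid], **turnout[vid]}
        for vid in exposure.keys() & turnout.keys()
    }

matched = match_exposure_to_turnout(exposure, turnout)
# V003 is dropped: no exposure record exists for that voter.
```

An inner join is the conservative choice here: a voter with no exposure record cannot be placed in either the exposed or unexposed group with confidence.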

Researchers have previously used location-based data (procured from third-party firms like SafeGraph) to document racial disparities in voting, with suggestive evidence that residents of entirely-black neighbourhoods in the US had to wait 29 per cent longer to vote than residents of entirely-white neighbourhoods. Similar data sources can complement the ad exposure data and help study the diffusion pattern of misinformation.

The third step requires measuring the effects of misinformation and fake advertising on citizens’ behaviour. One thing to keep in mind is that the interest here is in causal (and not just correlational) effects. Relying on correlation can bias the estimates, because voters who are targeted with manipulative content are more likely to be sympathetic to it already. While causation and correlation can coexist and are often conflated, correlation does not necessarily imply causation. Causation occurs when action A causes outcome B.

Correlation, on the other hand, means that action A relates to action B, but one event does not necessarily cause the other. To obtain unbiased causal estimates, we need to measure the counterfactual, i.e., compare similar voters exposed to varying levels of misinformation, perhaps due to random chance or explicit randomisation by advertising channels or the advertising firms themselves. In the Indian context, sources of such random chance can include the numerous macro- and micro-economic policies implemented at a large scale.
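The targeting bias described above can be shown with a toy simulation. In the sketch below, all numbers (the 0.05 true effect, the targeting rates) are invented for illustration: when exposure is targeted at already-sympathetic voters, a naive exposed-vs-unexposed comparison conflates the targeting with the ad’s effect; under randomised exposure, the same comparison recovers the true effect.

```python
import random

random.seed(0)

def estimated_effect(randomised, n=10_000):
    """Difference in turnout rates between exposed and unexposed voters."""
    voters = []
    for _ in range(n):
        sympathetic = random.random() < 0.5
        if randomised:
            exposed = random.random() < 0.5                   # random assignment
        else:
            exposed = sympathetic and random.random() < 0.8   # targeted at sympathisers
        # Assumed true causal effect of exposure: +0.05 turnout probability.
        p_vote = 0.4 + 0.3 * sympathetic + 0.05 * exposed
        voters.append((exposed, random.random() < p_vote))
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([v for e, v in voters if e])
            - mean([v for e, v in voters if not e]))

biased = estimated_effect(randomised=False)   # inflated: targeting + true effect
causal = estimated_effect(randomised=True)    # close to the true 0.05
```

In the targeted case, the exposed group consists almost entirely of sympathetic voters, so the naive estimate is several times the true effect; randomisation balances sympathy across the two groups.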

In the fourth and last step, results are aggregated (to address privacy concerns) and the aggregate impact of misinformation-driven changes in voting behaviour on election outcomes is measured.

Working together

To make this happen, governments and social media giants need to work together. Social media companies routinely log what users are exposed to, both for research and for retraining their algorithms.

The Centre’s UIDAI, the most sophisticated ID programme in the world, has rich data on 900 million eligible voters. Further, in December 2019, the government introduced the Personal Data Protection Bill in Parliament, which would create the first cross-sectoral legal framework for data protection in India.

This could mean less data in the hands of private firms; it may therefore be difficult for firms to accurately quantify exposure for users who deleted their accounts or were exposed to content later deleted by others.

We should recognise that well-intentioned privacy regulations, though important, may also impede assessments like the one we propose.

Overall, we think that the spread of misinformation through political advertising is an important issue and a possible threat to democracy. To tackle this problem, firms and governments (at both the federal and State levels) need to collaborate and act fast.

Kapoor is an Assistant Professor (Marketing), IIM Ahmedabad, and Chaturvedi is a Software Engineer with Qualcomm, San Diego, USA.

