The Bombay High Court’s split verdict on the constitutionality of the government’s proposed fact check unit (FCU) exemplifies the tension between countering the threat of misinformation and the risks of government involvement in fact checks. In 2023, FCUs emerged as the favoured policy intervention, with governments in Karnataka, Tamil Nadu and Uttarakhand each citing the need for government intervention to control misinformation.

Safeguarding the integrity of civic discourse from manipulative disinformation campaigns is paramount, especially as India enters a pivotal election season. In principle, fact-checks can effectively counter false narratives that mislead users and cause real-world harm.

While social media platforms have long partnered with third-party fact-checkers to warn users of false information, the threat of ‘fake news’ has grown in scale and sophistication.

However, FCU proposals denote a novel trend, where governments seek to fact-check misleading narratives. This idea of governments emerging as official arbiters of truth is the subject of widespread scepticism.

As more governments pour already scarce resources into setting up their own FCUs, addressing systemic limitations becomes crucial.

Who watches the watchdog?

With easy access to generative AI technologies, information pollution is becoming more abundant, powerful, and deceptive. At the same time, a large share of ‘false information’ online is likely innocuous and often a form of satire or artistic expression.

FCUs face the daunting task of sifting through this digital haystack to hand-pick harmful narratives that deserve their attention.

This entails identifying information emerging from suspicious or inauthentic sources while analysing trends to look for harmful content. Justice Patel, who led the Bombay HC Bench, raised a concern about “how few things are immutably black-or-white, yes or no, true or false”, warning that this could lead to an untenable system of coercive censorship of alternative views by the government.

Government actors are ultimately swayed by political incentives, which skews their outlook on narrative selection. Consequently, government FCUs may disproportionately target content critical of the government, while ignoring falsehoods that support its outlook. For instance, government-run FCUs in Malaysia and Thailand conspicuously stayed away from narratives about controversial regime changes and protests. In Singapore, the Minister empowered to issue directions to counter ‘fake news’ overwhelmingly used that power to target dissenting voices.

Unsurprisingly, FCUs proposed by both the Centre and Tamil Nadu target misinformation about themselves. Other FCUs are less clear about which narratives they will prioritise and how these choices will be made. The Court, too, questioned the public interest in scrutinising claims solely about the government.

Each proposed Indian FCU has a different structure, but all of them are designed to either label content as misleading, facilitate takedowns, or prosecute errant social media users. Owing to inherent conflicts of interest, fact checks by the state are prone to public distrust, as well as to legal challenges arising from free speech concerns.

For example, not all fake posts warrant penalties, but once flagged as ‘false’ by FCUs, users posting such content face the real possibility of being subject to prosecution.

Given instances of Indian police overriding legal safeguards to arrest users for innocuous social media content, citizens and journalists may be discouraged from speaking online for fear of FCU action.

This is precisely at issue in another case before the Madras High Court, where petitioners argue that the FCU will muzzle voices critical of the State government.

The structural conflict of interest resulting from government intervention necessitates institutional independence. For example, proposed FCUs should insulate editorial decisions from government influence, regularly publish transparency reports, and decentralise fact checking functions to numerous independent fact checkers.

Government efforts to counter misinformation would be far more effective if they instead focussed on enabling partnerships between social media platforms and a vibrant ecosystem of independent third-party fact checkers, rather than conducting the fact checks themselves.

Ahuja is an Analyst, and Mohan is a Senior Analyst, at The Quantum Hub, a public policy firm