Editorial

Algorithmic governance must be inclusive, adhere to privacy rights

Updated on February 07, 2020 | Published on February 07, 2020

India should use the recent judicial interventions in the Netherlands and Kenya for introspection before opting for digital solutions without checks and balances

Two recent judicial interventions, both from abroad, which upheld citizens’ right to privacy against the hurried implementation of algorithm-based governance tools, must act as a wake-up call for advocates of algorithmic governance in India. In a landmark ruling on Wednesday, the District Court of The Hague in the Netherlands ordered authorities to immediately halt a digital tool that used predictive analysis to detect fraud in welfare distribution, holding that the algorithmic programme violated basic human rights. The ruling concerns System Risk Indication, or SyRI, which uses algorithmic prediction methods to track and spot individuals who could potentially commit welfare fraud. The tool is armed with big data analytics and uses individuals’ private data on consumption and other significant activities to rank them and create risk profiles that authorities can use to detect fraud. Such tools can be misused. For instance, SyRI allegedly helped Dutch authorities spy on people in poor neighbourhoods based solely on algorithmic evidence, with no other proof. The court found that SyRI impinges on the principles of transparency and held that technology must respect privacy.

In Kenya, a high court stopped the country’s controversial biometric ID programme, which resembles India’s Aadhaar, until new data protection laws come into force. In a massive drill last year, the government had collected sensitive data, including fingerprints, from citizens for the Huduma Namba programme, triggering a heated debate in the country and beyond over data privacy and surveillance. The government had claimed its intent was to integrate all the data under one unique ID in order to enhance welfare distribution. But the court didn’t buy that logic, ruling that making such large-scale personal data easily available digitally could pose risks to individuals. In India, too, fears have been voiced over the scope for distortions in using citizens’ biometric data in welfare distribution and fraud detection. Such automated digital tools are acknowledged to reflect human biases and prejudices, and can imperil welfare distribution. They could end up aggravating discrimination to new levels. Instances of mistargeting in Direct Benefit Transfer (DBT) schemes post-Aadhaar have been reported from central India. It is important to be cognisant of the pitfalls of digitisation to ensure that DBT’s benefits are not sacrificed in the bargain.

In 2018, the Supreme Court, while upholding the constitutional validity of Aadhaar, had asked the government to introduce strong data protection rules to safeguard an individual’s right to privacy. India must use these global rulings as an opportunity for introspection before opting for digital solutions without checks and balances.
