With increased digital penetration, data has undoubtedly unlocked human potential to do a lot more — and efficiently, too. However, with more data comes a greater risk of misuse, often exemplified by leaks and the illicit selling of personal information. The discourse on safeguarding our data, including the discussion on the Personal Data Protection (PDP) Bill, emphasises the primacy of privacy policies and user consent as key bastions of defence.

Try recalling the last time you earnestly read through a verbose, jargon-laden privacy policy before consenting to share your data. Don't beat yourself up over being lax about it: multiple studies have demonstrated that privacy policies and informed consent are broken. They suffer from three behaviour-linked problems.

First, the transparency/comprehension problem: The verbose legalese used in privacy policies is often incomprehensible to laypeople; this problem is further compounded by low digital literacy in India.

Second, the data repurposing problem: Entities do not overtly disclose all the additional purposes for which user data could be used, resulting in ‘function creep’. And third, the consent fatigue problem: Users, asked to consent to data sharing again and again, grow weary of it and become unwilling to expend the time and effort required to consent meaningfully.

An over-reliance on consent has led to the prevalence of a binary ‘tick-the-box’ approach to data protection, rendering ‘informed consent’ perfunctory; users have the choice of whether to share their data, but it is far from a meaningful choice.

Some solutions posit that data-collecting entities should remain legally accountable for any breach or misuse of personal data, regardless of whether they obtained consent. To give this approach some teeth, a set of inviolable ‘data rights’ is envisaged. However, implementing and enforcing such rights remains a problem.

As it stands, India still does not have a data protection law, so such rights lack legal grounding. Moreover, infringements can be difficult and time-consuming to prove. For instance, if my data is used by AI and IoT systems for purposes other than those I consented to, how would I actually know? And if I somehow found out, would it be straightforward to mount a legal challenge? Even if it were, by the time such a matter is adjudicated, would any recourse offered be enough to offset the harm already done?

If we step back and take another look at the problem, we may find some alternatives. Many of the core issues around data privacy are behavioural in nature; users may wish to secure their data, but their intention does not always translate into action. Nudging user behaviour through human-centric design, therefore, offers a potential route to better data privacy. By placing people, rather than the service contract, at the centre of this relationship, we can enable better decision-making.

While designing privacy policies, for instance, UI/UX designers should be included at the very outset of the design process. Their inputs should be used to represent privacy policies visually, showing users how their data will be collected and used if they consent. Studies have shown that visually representing data flows, through short videos or animations, can make users more aware of what happens to their data when they consent, improving comprehension and transparency while also easing consent fatigue. It has the added benefit of addressing limited literacy and linguistic diversity in a country like India.

Preconfigure preferences

Device makers and operating systems can also be encouraged to implement a ‘master privacy preference setting’ on user devices. Effectively, this would give users a master control panel on which to preconfigure their data-sharing preferences, deciding the frequency and type of data they are comfortable sharing in the normal course of online activity. If an app requires more than a user's master preferences allow, they can either choose not to use it or take the time to consent specifically to its additional requirements.

On the supply side, such a structure would incentivise app developers to minimise data collection, or even to offer a ‘Lite’ version of their app with basic functionality requiring only essential data, so as to prevent large-scale user drop-off.

Businesses and other entities can also be incentivised to collect data ethically and responsibly by creating a government-approved market of accrediting agencies. These accreditors can carry out annual assessments of privacy policies and other data collection practices on a range of metrics, including data minimisation and purpose specificity, and provide score-based certifications or star ratings.

A similar mechanism is also envisaged through the ‘Data Trust score’ in the PDP Bill. If implemented well, it can go a long way in addressing the shortcomings we see in the current context.

Privacy policies today remain complicated and inaccessible to many. There is a case for behaviourally nudging users to invest more effort in comprehending, and consenting to, how their data is collected and used. Even as our lawmakers work towards a robust data protection law, we must also empower people and incentivise businesses to meaningfully safeguard privacy and autonomy in the digital realm, creating a win-win for all in the long term.

Rohit is a founding partner, and Krish is a public policy analyst, at The Quantum Hub, a public policy research and communications firm
