Automation, Artificial Intelligence (AI) and Machine Learning (ML) have significant consequences for privacy, and these consequences vary with the specific application, be it personal use, business operations or data monetisation. For instance:

AI and ML are being used to rapidly analyse large data sets and extract meaningful patterns and predictions, which may result in information manipulation, as seen in the creation of fake content by generative AI.

Another aspect of data monetisation involves selling insights from analysed data, including social media behaviour trends. This is where AI’s algorithmic decision-making may lack transparency, creating challenges in understanding how data is processed.

Data-centric approach

While handling massive volumes of personal data, we need to focus on a data-centric approach to privacy threat modelling (PTM) and privacy enhancing technologies (PETs).

When we discuss security, the focus is often on network- and perimeter-centric aspects rather than on the data itself. Technology is just one resource among many, and it brings its own consequences, limitations and advantages depending on how we use it. AI, ML and GenAI are technologies engineered by humans, and it is human ingenuity that determines how their extraordinary capabilities are applied.

Understanding the “what” simplifies the “how”. Early identification and resolution of privacy issues during planning and design are critical. Companies that prioritise transparency, trustworthiness and security in AI implementation can create an environment that fosters more effective use of advanced technologies, leading to better performance and greater success.

The focus needs to shift from self-defence to self-empowerment. The traditional self-defence posture has been about shielding against external threats, emphasising zero trust and data security.

In this shifted-focus approach, it is vital to self-define data: to embrace and leverage personal data for self-discovery and empowerment through data democratisation. This means understanding oneself by analysing habits, preferences and behaviours, and empowering individuals, regardless of technical expertise, to engage with data effortlessly. It also means instilling confidence in using data for informed decisions, and fostering a culture where everyone can enhance customer experiences with data-driven insights while balancing empowerment and privacy.

Transitioning from defence to self-defined data use emphasises proactive use of personal data for self-improvement and informed decisions, while still prioritising privacy and user autonomy through PETs. The development and wider adoption of privacy-preserving AI techniques, such as federated learning, homomorphic encryption and secure enclaves, are set to increase. These methods enable AI models to be trained without exposing raw data, thereby bolstering privacy protections.
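To make the federated learning point concrete, here is a minimal, illustrative sketch in Python (NumPy only, with synthetic client datasets standing in for private data, not a production implementation): each client trains on its own records and shares only model weights, which a coordinating server averages, so raw data never leaves the client.

```python
# Illustrative federated-averaging loop (hypothetical setup, NumPy only):
# each client fits a linear model on its own private data and shares only
# model weights; the server averages the weights and never sees raw records.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([1.0, -2.0, 0.5])          # hidden relationship to recover

def local_update(weights, X, y, lr=0.05, epochs=5):
    """One client's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Stand-ins for private datasets held by four separate devices/organisations.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(3)
for round_no in range(20):
    # Clients train locally on the current global model and return weights only.
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    # Server aggregates by simple averaging (FedAvg-style).
    global_w = np.mean(local_weights, axis=0)

print("recovered weights:", np.round(global_w, 2))  # approaches [1.0, -2.0, 0.5]
```

In practice, frameworks such as TensorFlow Federated or Flower provide this orchestration, and the privacy guarantee is typically strengthened further with secure aggregation or differential privacy rather than plain weight averaging.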

In our pursuit of data privacy compliance, fostering a culture of privacy is essential. Our adherence to data privacy principles should stem not just from the potential business impacts of non-compliance but from a commitment to balancing human rights with the rapid evolution of technology for ethical data innovation.

Looking forward, AI and data privacy will entail navigating increasingly strict global regulations. Countries are enacting AI Acts to mitigate adverse effects, clarify ownership and ensure transparency, so that responsible AI platforms can be built. Organisations and OEMs are collaborating to fortify data governance, creating an environment that safeguards privacy and strengthens data management practices for the future. Are we ready to leverage the positive aspects of technology to uplift and connect the world, thereby fostering innovation and contributing to a brighter global future?

Sehgal is Partner and Chaudhari is Associate Director, Deloitte India
