In this podcast, businessline’s Senior Deputy Editor, K V Kurmanath, delves into the concept of “data poisoning” and its implications in the world of generative AI.
Data poisoning is likened to eating contaminated food at a restaurant: just as bad food harms one's health, hackers inject false information into the data used to train generative AI models in order to manipulate their behaviour. This compromised data can surface as misinformation and errors when users later interact with the models.
The discussion highlights that the effects of data poisoning can range from relatively harmless misinformation about topics such as geography or currency conversion to serious consequences, such as a financial fraud-detection system failing to flag fraudulent transactions.
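The fraud-detection scenario above can be made concrete with a minimal, purely illustrative sketch (not from the podcast): a toy detector learns a threshold from labelled transaction amounts, and an attacker poisons the training set by relabelling fraudulent examples as legitimate, so the retrained model no longer flags a suspicious transaction. All data and names here are hypothetical.

```python
# Illustrative only: a toy fraud detector that flags amounts above a
# learned threshold (midpoint between the mean legitimate amount and
# the mean fraudulent amount).

def train_threshold(samples):
    """samples: list of (amount, label), label 1 = fraud, 0 = legitimate.
    Returns the decision threshold as the midpoint of the class means."""
    fraud = [a for a, y in samples if y == 1]
    legit = [a for a, y in samples if y == 0]
    return (sum(fraud) / len(fraud) + sum(legit) / len(legit)) / 2

# Clean training data: small legitimate amounts, large fraudulent ones.
clean = [(10, 0), (20, 0), (30, 0), (900, 1), (950, 1), (1000, 1)]
threshold = train_threshold(clean)  # 485.0

# Poisoned training data: the attacker relabels two fraud samples
# as legitimate, dragging the legitimate-class mean upward.
poisoned = [(10, 0), (20, 0), (30, 0), (900, 0), (950, 0), (1000, 1)]
bad_threshold = train_threshold(poisoned)  # 691.0

suspect = 600  # a fraudulent transaction
print(suspect > threshold)      # True  -> clean model flags it
print(suspect > bad_threshold)  # False -> poisoned model misses it
```

The point of the sketch is that the attacker never touches the model itself, only its training data; the same label-flipping idea scales up to the large datasets behind real generative AI systems.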
To guard against data poisoning, it is recommended to verify website authenticity, prefer established providers such as Google or Microsoft, and exercise caution when sharing personal information on unfamiliar websites.
There are also broader challenges posed by generative AI models. These models are initially trained to refuse sensitive or dangerous questions, but they can be manipulated into giving incorrect answers. Striking the right balance between correcting users and preventing misinformation remains a challenge.
The podcast also touches on the issue of deepfakes, emphasising the need for scepticism when encountering extraordinary claims and for verifying the credibility of sources. Deepfakes, which involve manipulating images and audio, can be used to spread false information and damage reputations.
In terms of policy, Kurmanath suggests implementing guiding principles for AI, creating a registry of AI providers, and encouraging government involvement in AI research. The government could collaborate with top institutions to develop its own generative AI models and establish monitoring bodies to curb the spread of deepfakes.
Overall, the podcast underscores the need for vigilance, responsibility, and informed decision-making in navigating the world of generative AI and combating the challenges it presents.