From farmlands to corporate offices to hospitals and beyond, AI is taking India by storm. In Telangana, for example, over 7,000 chilli farmers are using customised AI-based weather and crop advisories to optimise their agricultural yield. Even the quality of their produce is tested and certified through AI algorithms, to help them price their products better.

Meanwhile, in fields like education and medicine, AI has the potential to personalise the care and attention given to each individual. Imagine the implications in a country as large as India, whose people have different biochemistries, diets, and medical needs, as well as different academic interests and learning speeds.

Till now, a one-size-fits-all approach may have been the only practical option in medicine and education. After all, we have just one doctor for every 854 people, and one teacher for every 24 students. But with AI, doctors could potentially customise treatments to each patient’s unique medical and lifestyle profile. Teachers could tweak educational curricula and objectives to suit each student’s learning needs.

The possibilities are tremendous. But so are the risks.

Mindful AI adoption

AI is still so nascent that we’re just beginning to understand the dangers it could unleash. What if a car’s AI-powered steering system malfunctioned on a busy road? Or an AI algorithm recommended the wrong drug to a patient? Or an AI system that maintains crop temperature and soil moisture was hit by a cyberattack?

With the sheer volumes of data AI is ingesting, the technology could eventually start thinking, creating and acting on its own. And that would send us down a slippery slope.

Being vigilant is key. Salesforce’s latest generative AI snapshot research, titled ‘The Promises and Pitfalls of Generative AI at Work’, found that workers worldwide are forging ahead with generative AI, regardless of their company protocols. The study indicates that 84 per cent of Indian workers would consider inflating their generative AI skills to secure an opportunity.

Also, 53 per cent of Indian workers say their company does not have clearly defined policies for using generative AI at work. That gap matters: we don’t want to give away all our power to AI, or become so reliant on the technology that we’re no longer using our own minds. The goal should always be to stay one step ahead, to be smarter than the AI.

Where do we start?

Today’s deep learning and generative AI models can process far more data than ever — a lot of it unstructured. What’s more, AI models have become a lot easier to query. The more data we feed them, the more they learn, and the more powerful they get.

Going by current trends, AI will soon find its way into everything — from doling out loans, to deciding whom to hire at a business, to even predicting who’s more likely to commit a crime.

Diversity in data

Given how life-changing these decisions can be, it’s imperative that the AI models we use are as trustworthy as possible. And that starts with ensuring that the data used to train them isn’t biased, false, unsecured, toxic, or incomplete.

But who gets to decide if the data is good or bad? We all bring biases to the table which, if left unchecked, can be reflected in the algorithms we write. A skin-cancer detection algorithm trained primarily on images of lighter skin may fail to detect cancer accurately on darker skin. An AI recruitment tool that has learnt that most people in a particular job are men may favour male applicants.

Hence, the need for a diversity of perspectives — both in the data that’s fed into the AI algorithms, and in the teams of people who work on them. This is especially important in India with its diverse cultures, cuisines, religions, and landscapes. If we aren’t accounting for all that diversity in the data we use to train AI, then we’re only going to exacerbate the inequities that exist.

Take language, for instance. India has 22 official languages and many more indigenous ones. But most of them aren’t adequately represented in AI models due to the lack of training data.

Non-profit start-ups like Karya are looking to change that by engaging native language speakers to create voice and text datasets that can then be used to build more linguistically inclusive AI models. In the process, Karya’s workers, who hail from some of the poorest and most marginalised communities, have the chance to earn a supplementary income. They even receive royalties when the data they create is resold.

Stories like these demonstrate that an ethical and inclusive AI industry is indeed possible and within reach.

Humans in front, centre

AI has many potential uses. But let’s remember that its ultimate purpose is to assist humans: to make us more efficient and to augment our intelligence, not to replace human supervision altogether.

Let’s also not underestimate the value of the human touch. We’re all social beings who rely on each other to survive and thrive. When Covid-19 took away that sense of connection with its lockdowns and social distancing, the psychological fallout was huge. So, even as we use AI to automate more tasks, let’s not do so at the cost of human interactions, be it with our customers or our colleagues. AI should free us up for more human engagement, not less.

Generative AI hallucinations are a concern, but not necessarily a deal breaker. Design and work with this new technology, but stay clear-eyed about its potential for mistakes. When you have checked its output against trusted sources of truth and questioned the work, you can go into your business dealings with more confidence.

Regulation will also be key in strengthening AI trust and accountability. So too will global collaboration.

AI isn’t limited by geographic boundaries. It can impact everyone, everywhere. Only when countries work together to share their resources and knowledge can we truly unlock the potential of AI while keeping its risks in check and building a positive digital future for all.

The writer is CEO & Chairperson, Salesforce India