The best data and the most elegant artificial intelligence models amount to nothing if humans do not believe the system is effective, fair and adaptable. The most valuable and most used AI systems instil trust as they operate.

Data and AI in India have grown exponentially in the last few years.

The Nasscom report, ‘Unlocking Value from Data and AI’, indicates that data and AI have the potential to add $450-500 billion to India’s GDP by 2025. Industries such as consumer packaged goods and retail, banking and insurance, and agriculture could account for close to 45 per cent of this value.

As per the Global and National AI Vibrancy Rankings of Stanford University’s Institute for Human-Centered Artificial Intelligence, India ranked highest among 22 countries in AI talent concentration in 2021, ahead of major AI powers such as the US, the UK, Germany and China.

Beyond talent, India has shown strength in research output. The number of AI-related journal publications has grown to more than 12,000, up from fewer than 5,000 in 2017.

All this speaks volumes about data and AI thriving in India. Yet, when it comes to implementing and scaling AI initiatives, companies — particularly start-ups — find themselves grappling with challenges. These range from concerns over trust, cost, and the privacy and security of data to issues of cultural acceptance and unclear regulations on privacy, security and ethics.

The government of India recognises that consistently strong and responsible AI practices are needed to implement advanced AI. NITI Aayog, in its National Strategy on Artificial Intelligence (NSAI) paper, proposes the way forward. It identifies measures to accelerate the adoption of AI, strengthen research, promote reskilling, and facilitate responsible AI development. This strategic approach, backed by the government, could give companies the clarity and support they seek to address issues such as trust and to leverage AI solutions optimally.

Over time, robust AI risk management will be a substantial component of new enterprise risk management processes.

Trust is important for simple automation-grade AI, and even more critical for advanced AI, where calculations are too fast or complex for humans to quickly understand. Autonomous AI and self-training AI are at the core of the corporate dream of enterprise-wide AI systems.

With economic uncertainty looming, controlling costs is likely to remain at the top of the list of corporate concerns. The challenge for cost-focused companies is to not neglect trust as they engage more with AI.

The writer is Executive Vice President – Global Head AI and Automation and ECS, Infosys