Artificial Intelligence is the new buzzword for most companies, but when it comes to actual adoption, many projects remain stuck in the pilot stage. Bias, lack of expertise, ethical concerns, and poor data provenance continue to be impediments to enterprise LLM adoption.

In an interview with businessline, Ritika Gunnar, General Manager of Data and AI at IBM, discussed how the tech major is handholding clients through these challenges.

Q

Almost all companies talk about AI, but when it comes to actual adoption, many enterprises are still at the pilot stage. What are the reasons for this slow adoption of AI?

We have what I call a thousand flowers of AI blooming, yet when it comes to putting it into production, we still see that crossing that chasm can be difficult. The lack of trust and transparency in AI is the number one reason.

It’s important to make sure the models you have are robust enough and that you are protecting them against adversarial attacks. The second thing we see is that bigger is not always better when it comes to models. A number of organizations find that when they extrapolate the cost of taking something from a pilot to full-scale deployment, it is cost-prohibitive.

And so, from our perspective, this is one of the areas where IBM fundamentally believes that smaller language models are actually better. We can get better performance for particular use cases from small language models than from large language models; fit-for-purpose will give you better outcomes. That’s why we see a lot of proofs of concept happening with large language models, but as things go into production, we see small language models becoming more important.

Q

Where do you see AI adoption happening the most?

We see AI being used for cost savings, for productivity, and for transformational use cases. The one where we see the most traction right now from most organizations is really about productivity and efficiency. One of the most common use cases would be customer care.

The second one is what I would call a digital workforce. In most organizations there are repetitive, laborious tasks that can be automated using generative AI. The technology underneath will continue to evolve, mature, become more accurate, and become more productive, but we know those use cases are a great starting point for most organizations, and they will get even better over time.

Q

Many companies are still not comfortable spending money on AI because of the costs involved and the uncertain outcomes, given that the technology is still evolving.

Our advice to our clients is always to start small, and to start on something where you can see outcomes. Because as you begin working on use cases for generative AI, there are three areas where an organization needs to change. One is the technology itself.

The second is the processes. As you’re embedding generative AI into the organization, effectively what you’re doing is democratizing access to AI for the rest of the organization. Traditionally, AI was isolated to the data scientist. With generative AI, you’re democratizing it so that every app developer is now an AI app developer.

So there’s a technology aspect, a process aspect, and a skills aspect for the people. Our advice is always to start small because you’re going to need to learn how to change all three. From there, you can start identifying other impactful use cases across the organization.

Q

How do you see India as a market and as a hub for AI related development for IBM?

The potential is quite large, and we are investing quite heavily in our teams in India. We believe India has a great developer ecosystem, and we have invested quite a bit in our own development resources there as well as in the broader community of developers.

Q

When do you see AI combining with Quantum Computing at scale?

AI in itself is a transformative technology, and we’re reaping the benefits of what it can do to make us more productive through transformative use cases. So is quantum computing. Our research teams have been working quite heavily on what it means to have that intersection.

Of course, maturing not just the AI technologies but also the applications and the languages that need to run on them takes time. So I don’t have an exact timeline, but it is absolutely a priority that we’re working on within our research teams.

Q

What is the number one challenge for you today?

Speed of innovation is key. The market is moving extremely quickly. We have released over seven AI products and 50 AI features in just over a year, and we’re really accelerating what it means to cross the chasm from pilots to production. Our goal is to help enterprises, whether mid-market or large organizations, infuse AI across their most essential systems.

Q

What can we expect to see from IBM in the near future?

There’s a lot of technology we’re working on that you’re going to hear about from us in the future. AI agents are one of them. The second is AI middleware. If you think about what it means for all of this AI to be in the world, there are probably over a billion new applications that are going to be created, assisted by generative AI.

You need to be able not only to build them using generative AI, but also to run and manage them. This notion of AI middleware is going to be really important. The third one is domain specificity. A lot of the growth is actually going to happen in how you can help with particular domains. Within IBM, as client zero, we are already using AI in areas like HR, our sales organizations, and IT.

Being able to assist and accelerate in particular domains is one of the things that we’re working on quite a bit.