Today’s AI models are not reliable: Dell CTO

Varun Aggarwal Mumbai | Updated on November 15, 2019

Patricia Florissi, Vice-President & Global CTO, Sales, Dell Technologies

They often have biases embedded in them because of the way they are designed

Patricia Florissi, Vice-President & Global CTO, Sales, Dell Technologies, is a woman on a mission to remove biases from artificial intelligence models, trying to ensure that a lack of data and datasets does not result in biased predictions by AI programs. In a conversation with BusinessLine, Florissi talks about the challenges that AI faces and what needs to be done to address AI-related issues.

Edited excerpts:

What kind of innovations are coming out of the India R&D centre?

India is the fastest-growing innovation market. I think India is leading on the issue of the unique identity card, and on how you can actually get passports delivered in three days versus months. We are seeing how India is transforming digitally, how it is actually using digital transformation as a major lever, especially with a population of 1.3 billion. India is coming up with software solutions that are being bought and acquired by major companies out there. And you definitely have the training, right, the academic training for computer science as a whole.

Did you know that India is the only country other than the US where Dell has manufacturing, R&D, a Centre of Excellence, and a multi-faceted presence? And we are seeing innovation coming throughout our product line. We have thousands of R&D members here in India, and I cannot point to any particular innovation because it is happening throughout the product line.

AI often has biases embedded in it because of the way it is designed. How do you make sure that we are not getting into an automated world which is taking us in the opposite direction?

That is actually my focus area of research, and that's the whole idea of how you can actually do AI at scale. AI started in the 50s and is only now coming of age, as I call it, and it took 70 years to get there. And one of the reasons is that AI required a confluence of aspects. One is the sophistication of the algorithms. We are talking of deep learning, which is "deep" because of the number of layers in the neural network, between 120 and 200 layers stitched together.

If you look at the difficulty for data scientists to actually bring that together, it requires tools, a development environment, a way of viewing and abstracting the composition of the algorithms. And only now are we achieving that level of maturity in terms of framework and development support, and so on.

The second element is that AI requires a tremendous amount of data to train the model; people talk about data in terms of volume, and I talk about data in terms of posture.

However, at the same time, you have privacy, regulatory and bandwidth constraints that are impeding the data from coming to a central location or to a region where you can actually train all the models in a unified way.

How does that solve the issue of biases?

Because the more data you include, the less biased you are. The bias in AI comes from two dimensions. The first is the algorithm: you can have an inherently biased model in AI. The second is that you train your model on a very biased dataset.

And the diversity of the data will only come when you take into consideration geographically-dispersed datasets. If you don't find a mechanism to actually analyse data in a dispersed, restricted manner, you are going to face a huge hindrance in addressing the problem of bias.

As we are still in the maturity phase, how reliable are today’s AI models in terms of being neutral and not being biased?

They are not (neutral). And can I ask you a question? How safe and secure are the apps that you actually use on your mobile, as also the information that you are sharing and posting through Facebook, Twitter, LinkedIn, all of social media? They are not; yet, has that stopped the major population from doing so? Absolutely not. Because you cannot; the population cannot wait until the technology is mature, safe, secure, privacy-preserving, compliant and protected to use it. So you have that tension, which is actually sometimes a necessary one, to drive human progress.

Technology will go a little bit ahead, then the regulation follows, and that's what is happening in AI. I may have too practical or too Brazilian a view of the world, but that's what life is; it is not perfect.

What keeps you awake at night?

I think what keeps me awake at night is: how can I look back 10 years from now and say I was part of it but didn't participate in a meaningful way? We are, you and I, part of a huge moment in history. What have we done? You know, I always end my presentations, all of them for the last three years, with one thing.

There are three types of people on earth: people who make it happen, people who watch it happen and people who do not know what happened. Which one am I? Alright, which one are you? Ten years from now, will I wake up and say, 'Oh my God, I was in the third category! Will I be able to forgive myself?' Will you?
