Data is a new kind of capital: Oracle’s Senior Data Strategist

Jinoy Jose P Chennai | Updated on November 22, 2019



Oracle’s Senior Data Strategist Paul Sonderegger says Oracle recognises that data really is a new kind of capital, even though accounting rules may not allow for that in all cases, and that the vast majority of data value creation in the digital economy happens inside the same companies that create the data. BusinessLine met Sonderegger at the recent Oracle Cloud Summit in San Francisco. Excerpts:

How’s the data economy evolving and what are the new challenges in terms of data management?

What we see among our customers, especially among large enterprises, is that they’re starting to get the idea that data is a true asset, that it’s a kind of capital. What this means is data is a factor of production, an economic factor of production in digital goods and services. In fact, a couple years ago, The Economist called data the world’s most valuable asset, but it’s a strange asset, and one of the ways that it’s strange is that most data that gets produced never goes to market. Of course, some data does get bought and sold, and there are some really important privacy and security considerations around that practice.

But, most data never goes to market; it gets used inside the same firm that creates it. And, what this means is that the majority of the value creation from data in the data economy happens inside of the enterprise that creates it.

So, in each one of our customers, in each one of these companies, there is a hidden data economy, and there is a diverse supply of data coming from a growing number of applications, sensors, smart devices, and all of these things are creating small data assets, sometimes very large piles of data assets. So, that’s the supply side.

At the same time, on the demand side, there is all kinds of latent demand from different business units. They constantly have new questions because they’re responding to new competitive threats on the outside, but they don’t have any good way to express what kind of data products they wish they could get. So, what do you do in such a case?

Well, the way that we think about this problem is that this is a hidden data economy, it’s hiding in plain sight inside each company. It’s not going to go away and companies are not all of a sudden going to start working as if they had these internal markets with actual pricing to provide signals.

Instead, what they can do is provide a market exchange of sorts — a data exchange inside each company that brings the transaction costs of getting the data you want into the shape you need to near zero. There are a couple of ideas that we need to talk about in there. Autonomous data management is the key to bringing down the transaction costs of getting data from its point of origin to its many points of use inside a large company.

How exactly is that done?

Autonomous Data Management now has a couple of responsibilities. One is to make the data assets that have been created easier to find and discover for analysts and data scientists, so they know what data assets are available to them, in order to make it easier for them to, in fact, create new analytics, create new algorithmic services, and things like that. That’s one of the things that autonomous data management has to do.

One of the other things that autonomous data management has to do is reduce the time and effort on the part of these analysts and data scientists to turn the data assets that they uncover into the structure they actually need for that specific analytics use case and that specific algorithm.

But Autonomous Data Management also has to provide governance on this internal data market, so that the company knows exactly who is accessing this data. It also has to keep audit and log trails of what they’re doing with it: in what analytics and algorithms do these data observations participate, under what jurisdictions, and where in the world are these analyses taking place? And it would be good if this Autonomous Data Management could also make it easier for companies to let you know how they are using your data.

And that calls for more transparency...

Right. So that it’s more transparent to you what they’re actually doing with it, so that you could be comfortable that they’re using it and using it in effective ways or using it in ways that you’re okay with. How do we do this? The Autonomous Database is the first step.

Are all these things being done in real time?

It depends, because sometimes you want it to and sometimes you don’t. The other part of the answer is that sometimes, even when you want these analyses to happen in real time, you don’t necessarily want the system to take an action automatically; sometimes you want it to raise an alert and hand it to a human in the loop. So sometimes you do want this kind of processing, and some machine learning, to happen automatically in real time. This is the case with fraud detection, for example.

So, what’s Oracle’s approach here?

We recognise that data really is a new kind of capital, even though accounting rules may not allow for that in all cases, and that the vast majority of data value creation in the digital economy happens inside the same companies that create their own data. In these internal data economies, inside each one of these enterprises, the transaction costs of getting data from its point of origin to its multiple points of use are too high.

An Autonomous Data Management platform brings that down. To do that, we are creating the Autonomous Database, which simplifies and reduces the level of effort required to create new applications, and likewise the effort required to create new analytics and algorithms.

How will this go forward from here?

There are three big impacts. The first is an increase in data productivity: right now, a lot of data assets are simply not used. Autonomous Data Management brings down the time, the cost, and the effort to use these data assets, so a single dataset gets used more, for less.

The second big impact is an increase in tech labour productivity: developing applications more quickly, developing algorithms and analytics more quickly, but also a dramatic reduction in the amount of time that IT specialists have to spend keeping this whole data tier performant.

And the third big impact is an increase in data value. The reason we can make that claim is that reducing the time, the effort, and the cost to use these data assets makes it easier to uncover and capture the option value in your data: additional uses that had not been anticipated when the data was first created.

The interviewer visited San Francisco recently to attend a cloud computing conference at the invitation of Oracle

