It's easy to find a comfort zone and settle down in it, especially if you are a high achiever in your chosen area. But Prof Anand Sivasubramaniam says it's normal for him to change careers once every 4-5 years! “Not a job change, but a change in my area of interest,” he clarifies. From his early research in IT resource management for high-end computing to his most recent interest, power management in high-end computing, he has seen more than a bit.

eWorld met him soon after he was recognised as the “2010 Distinguished Scientist” by the Association for Computing Machinery (ACM), which “honours ACM members with 15 years of professional experience with achievements of impact in the computing field.”

Prof Sivasubramaniam is now with the Department of Computer Science and Engineering at Pennsylvania State University, an institution he has been with since finishing his Ph.D in 1995. He even has a reason why he chose Penn State: “Most universities offer either Computer Science or Computer Engineering (the latter sort of combines with Electrical Engineering). This is one place that offers a combination, which allows for research in both software and hardware.” A ‘nice blend’, as he calls it.

He started off building a research programme in “something that hasn't been done before”, for, “otherwise, you don't get credit for your work!” He quickly runs through his work in the ’90s saying, “We did work on large-scale computing. We came up with interesting schedulers and collaborated with the likes of IBM and Unisys. Some of my work has even gone into a couple of companies' products.”

Power management

Around 1999, his interests shifted to power management for computing systems. Specifically, his research focused on matching the performance of high-end computing equipment to the power it consumes. One of his papers succinctly captures the idea with its title, which partly runs: “From High Performance to Low Power...”.

Prof Sivasubramaniam gets more and more animated as he delves into this area. He says, “All these chip and circuit designers have a problem because of power — how much you push the envelope on performance is a consequence of how much power is required.” This, he says, is called the thermal design constraint.

In simple terms, you cannot go on increasing performance and allow for a proportionate increase in power consumed. In low-end systems, the chip heats up and that inhibits performance. In high-end computers, the resultant performance may not justify the cost of such power. He says, “That's one of the reasons why processors today don't talk about high speeds.” He recalls marketing campaigns, from five years ago, that talked of 500 MHz, 2 GHz or 4 GHz. Now, the marketing game has changed. “It's all about multiple cores in a single chip.” Remember dual-core and quad-core?

Performance per watt

He explains, “You can't just keep pushing up clock frequency, you have to do something else.” So, if you have multiple cores inside a chip, each runs slower than a single core pushed to its limit. But together, they deliver a punch. Clearly, this is not for your everyday word processing and presentation needs, but for workloads with many concurrent users. An enterprise servicing a mass consumer base would benefit. Business analytics and transaction-oriented processing for a bank, a retail chain or an airline would benefit from such systems. “If you have a million transactions per second, it becomes easier to leverage these kinds of capabilities. So we now talk about performance per watt rather than pure performance. It becomes very interesting to push performance within a specific watt usage, instead of just making machines work faster and faster.”
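To see why, consider a toy calculation of our own, not his: if a core's dynamic power grows roughly with the cube of its clock frequency (power scales with voltage squared times frequency, and voltage is scaled down along with frequency), then four cores at 2 GHz out-deliver one core at 4 GHz per watt. A minimal sketch, with that cubic scaling as the assumption:

```python
# Toy model: why many slower cores can win on performance per watt.
# Assumption (not from the interview): dynamic power ~ frequency cubed
# (P ~ V^2 * f, with supply voltage scaled down along with frequency).

def perf_per_watt(cores: int, freq_ghz: float) -> float:
    """Aggregate throughput (~ cores * frequency) per unit of relative power."""
    throughput = cores * freq_ghz
    power = cores * freq_ghz ** 3
    return throughput / power

single = perf_per_watt(cores=1, freq_ghz=4.0)   # 4 / 64 = 0.0625
quad   = perf_per_watt(cores=4, freq_ghz=2.0)   # 8 / 32 = 0.25
print(f"quad-core delivers {quad / single:.0f}x the performance per watt")  # 4x
```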

So, why did he look at power management instead of another new area? “I realised that thermal constraints are going to make people ask new questions. Today, 2-5 per cent of the carbon footprint across the world is from IT. It consumes that much electricity – in usage, maintenance and manufacturing.”

He elaborates, “If you buy a server for $2,000 and operate it for 4-5 years, you would spend as much in electricity to power the server as the cost of the server. As equipment costs go down, power cost is going up. Going forward, this would be the concern for most organisations – Microsoft, Google, Facebook being examples. They have huge data centres, each running 10,000-20,000 servers, and power determines the cost of their operations.”
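A back-of-envelope check of that claim, with illustrative figures of our own (a 300 W server, a $0.10 per kWh tariff and a 1.5x cooling overhead — none of these numbers come from the interview):

```python
# Back-of-envelope: lifetime electricity cost of a server.
# All figures are illustrative assumptions, not from the interview.
server_price_usd = 2000
power_draw_w     = 300     # average draw of the server
tariff_usd_kwh   = 0.10    # electricity price
cooling_factor   = 1.5     # extra energy for cooling/air-conditioning
years            = 5

kwh = power_draw_w / 1000 * 24 * 365 * years * cooling_factor
electricity_usd = kwh * tariff_usd_kwh
print(f"{kwh:,.0f} kWh -> ${electricity_usd:,.0f} vs a ${server_price_usd} server")
# ~19,710 kWh -> ~$1,971: roughly the purchase price of the server
```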

There are three components to the cost of power, according to him. The first is the direct consumption cost. Then comes the heating and air-conditioning cost. The third is the cost of provisioning, i.e., $10-20 per watt to lay cables and provide transformers, UPS units and the like. “Data centres continuously require capacity expansion. You have to upgrade power infrastructure to meet capacity — that is the big issue.”

Powering storage

He was also one of the first few, he says, to write a paper on storage power. Think of it: data centres run thousands of hard disks that are constantly spinning, just to store data... whether you actually use them or not. “These disks spin at 4,500 revolutions per minute (rpm), 7,200 rpm or 10,000 rpm. At higher speeds, the power consumption goes up drastically.” He calls this the area of energy-proportional computing. “I need to get a return on my investment in a data centre. I can only use a certain level of watts – and I should get performance proportional to that.”

He and his team were able to demonstrate dynamically modulated RPM, based on periods of activity and inactivity. The disks shift between speeds, spinning up only when data is needed. This helps save power.
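A minimal sketch of the idea — our own simplification, not the team's actual controller: step the spindle speed up when requests arrive, and back down after a quiet spell.

```python
# Sketch of dynamic RPM modulation (a simplification, not the published design):
# spin up on activity, step back down after idle periods. Spindle power rises
# steeply with RPM, so idling at a low speed saves energy.

RPM_LEVELS = [4500, 7200, 10000]

class DiskRpmController:
    def __init__(self, idle_threshold_s: float = 5.0):
        self.level = 0                      # start at the lowest RPM
        self.idle_threshold_s = idle_threshold_s
        self.last_request_s = 0.0

    def on_request(self, now_s: float) -> int:
        """A data request arrived: spin up one level to serve it faster."""
        self.last_request_s = now_s
        self.level = min(self.level + 1, len(RPM_LEVELS) - 1)
        return RPM_LEVELS[self.level]

    def on_tick(self, now_s: float) -> int:
        """Periodic check: after a quiet spell, step the speed back down."""
        if now_s - self.last_request_s > self.idle_threshold_s:
            self.level = max(self.level - 1, 0)
        return RPM_LEVELS[self.level]

ctrl = DiskRpmController()
print(ctrl.on_request(now_s=1.0))   # 7200 - activity, spin up
print(ctrl.on_tick(now_s=10.0))     # 4500 - idle, spin back down
```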

But why can't you rest the disks entirely and spin them up only when you want to access data? Won't that make them highly power-efficient? Prof Sivasubramaniam smiles the smile of the wise: “You can't make them rest by choice – the problem is, if you want to access data, it takes 15 seconds for them to start spinning. This delay — or latency — is a terrible issue for, say, banking transactions, which need to complete almost immediately.”

Self-sustaining computing

The problem of power management is worse in developing countries than in the developed world, and India is an example of how economic growth presents key challenges. He reasons, “Power tariffs here are higher than in the developed world. Second, the hot weather in much of the country adds to cooling costs. Third, real-estate costs are a problem too.”

State governments in India, he says, restrict industrial use of power — the time at which you can use it and how much you can draw at that time. This stems from supply not meeting demand. So, the industry has to answer the question: given these restrictions on electricity, what can I do to manage? You could schedule your workload: monthly payroll applications are an example. You can draw power for these at non-peak times.
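A minimal sketch of that kind of load shifting — the peak window and the example jobs here are hypothetical, purely to illustrate the idea:

```python
# Sketch of shifting deferrable work to off-peak hours. The peak window and
# the example jobs are hypothetical assumptions, not from the interview.

PEAK_HOURS = range(9, 18)    # assume 09:00-18:00 is the restricted window

def next_run_hour(submit_hour: int, deferrable: bool) -> int:
    """Hour a job should run: deferrable jobs wait out the peak window."""
    if deferrable and submit_hour in PEAK_HOURS:
        return max(PEAK_HOURS) + 1   # hold until the peak window closes
    return submit_hour               # interactive work runs immediately

print(next_run_hour(11, deferrable=True))    # 18 - monthly payroll waits
print(next_run_hour(11, deferrable=False))   # 11 - customer-facing job runs now
```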

He calls these the IT knobs, and cites the example of the average laptop: the processor does not operate at one fixed frequency. It does what is called frequency scaling, moving within a range between a maximum and a minimum frequency. Depending on your activity, the power consumed also changes.
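On a typical Linux laptop you can watch this knob directly through the kernel's cpufreq interface (assuming it is exposed under /sys, as on most distributions; values are reported in kHz):

```python
# Read the frequency-scaling knob via Linux's cpufreq sysfs interface.
# Assumes a standard cpufreq-enabled kernel; sysfs reports values in kHz.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read_khz(name: str) -> int:
    return int((CPUFREQ / name).read_text())

print("governor:", (CPUFREQ / "scaling_governor").read_text().strip())
print("min:", read_khz("scaling_min_freq") / 1e6, "GHz")
print("max:", read_khz("scaling_max_freq") / 1e6, "GHz")
print("now:", read_khz("scaling_cur_freq") / 1e6, "GHz")
```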

Likewise, the IT knob for data centre operations can help in adjusting to power supply constraints. But isn't this intelligence built into data centres anyway? No, he says. “Look at the IT department – the chief here is in charge of buying, installing and managing servers and applying packages. The department does not pay its own electricity bill. The facilities department does that. These two don't work hand in hand. There is no incentive for the IT admin to lower power usage. It's not part of his budget. And, the facilities chief can't tell him what to do.”

Power capping

His other area of interest is ‘power capping’. “We always worry about the units of electricity we consume. Equally important is the peak power usage. I may use power at different parts of the day, but when and for how long I peak in usage has consequences for the circuitry. The power infrastructure has to accommodate peak demand even though I don't hit it often. Many commercial tariffs have a peak power charge built in.

“One solution we are now looking at is energy storage for capping.” Most data centres have batteries lying unused for, say, 99.9 per cent of the time. These are not general back-up devices; they primarily serve as fail-over devices during an electrical outage, bridging the transition to diesel generators. Now, could we leverage these batteries to provide power even when the grid is available, to lower the peak draw from the grid?
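A minimal sketch of that peak-shaving idea, with made-up numbers: whenever demand would exceed a contracted cap, the battery supplies the difference; when demand falls below the cap, it quietly recharges.

```python
# Sketch of battery peak shaving (illustrative numbers, not from the research):
# hold the grid draw at a cap, discharging the battery above it and
# recharging when demand drops below it.

GRID_CAP_KW = 100.0    # contracted peak draw from the grid
BATTERY_KWH = 50.0     # usable battery capacity
CHARGE_KW   = 20.0     # maximum recharge rate

def step(demand_kw: float, soc_kwh: float, hours: float = 1.0):
    """One interval: return (grid draw in kW, new battery state of charge)."""
    if demand_kw > GRID_CAP_KW:
        discharge = min(demand_kw - GRID_CAP_KW, soc_kwh / hours)
        return demand_kw - discharge, soc_kwh - discharge * hours
    headroom = GRID_CAP_KW - demand_kw
    charge = min(CHARGE_KW, headroom, (BATTERY_KWH - soc_kwh) / hours)
    return demand_kw + charge, soc_kwh + charge * hours

soc = 50.0
for demand in [80, 130, 120, 60]:           # hourly demand in kW
    grid, soc = step(demand, soc)
    print(f"demand {demand:>3} kW -> grid {grid:5.1f} kW, battery {soc:4.1f} kWh")
# The grid draw never exceeds the 100 kW cap.
```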

This is the context to his claim that electricity storage will become a big area going forward. He foresees vendors providing data centres in shipping containers! Truck it up and drive it anywhere. Rural areas with little or no grid power are where stored power comes into play. “You can put it anywhere you want.”

bharatk@thehindu.co.in
