High-performance data centres are becoming a requirement for many types of companies, avers Prakash Sripathy, Vice-President - SW Engineering & India Operations, Force10 Networks, Inc., US (http://bit.ly/F4TForce10).

“Financial services firms need high-performance data centres to rapidly execute transactions. Web portals need high-performance data centres to provide a robust customer experience. Hosting companies and public cloud companies need high performance to deliver performance that is as good as it would be for customers running applications locally,” explains Prakash, during the course of a recent interaction with Business Line.

Enterprises of all kinds increasingly rely on high-performance data centres as they move to virtualised and converged environments, he adds. Our conversation continues over e-mail.

Excerpts from the interview.

What have been the key milestones in the evolution of the data centre market?

Four key milestones impacting the evolution of the data centre market are virtualisation, network convergence, automated networking, and cloud computing.

Historically, data centre managers have implemented each application on a separate server. This leads to very poor server utilisation (as low as 15 per cent), and server sprawl that rapidly increases capital expenses, space, and power costs.

Virtualisation allows data centre managers to create virtual machines (VMs) and run multiple applications on a server, allowing server consolidation, and increasing server efficiency. However, virtualisation demands a network infrastructure that is very fast, flexible, and automated enough to meet the constant traffic changes present in a virtualised environment.
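
To put rough numbers on the consolidation benefit, the sketch below estimates how many physical hosts a virtualised environment might need; the application count and target host utilisation are assumptions, while the 15 per cent figure comes from the interview.

```python
# Hypothetical back-of-the-envelope consolidation estimate. The application
# count and target host utilisation are assumptions; the 15 per cent average
# utilisation figure comes from the interview.
import math

APPS = 120                       # number of applications (assumed)
APP_UTILISATION = 0.15           # average utilisation of a dedicated server
TARGET_HOST_UTILISATION = 0.70   # safe utilisation target per virtualised host (assumed)

hosts_before = APPS              # historical model: one application per server

vms_per_host = int(TARGET_HOST_UTILISATION / APP_UTILISATION)   # 4 VMs fit here
hosts_after = math.ceil(APPS / vms_per_host)                    # 30 hosts

print(f"Dedicated servers: {hosts_before}")
print(f"Virtualised hosts: {hosts_after} ({vms_per_host} VMs per host)")
```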

Data centre managers are also working to unify their server and storage networks. Previously, they have had to implement and maintain two separate networks, one for servers and one for storage. Using protocols such as Fibre Channel over Ethernet (FCoE), data centre managers are converging on a single Ethernet network. Convergence demands switches that can support Ethernet and FCoE with equal ease, and which deliver the performance and flexibility to work well in a dynamic traffic environment.

In order to keep up with dynamic changes in servers and storage, the network itself must be highly automated. As the data centre manager creates new virtual machines, for example, the network switching infrastructure must provision connectivity for that VM as quickly and with as little human intervention as possible. Otherwise, server changes can take hours or days to fully implement.
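
Conceptually, the automation described here behaves like an event-driven hook: when the hypervisor reports a new VM, the network configures the relevant switch port without human intervention. The sketch below is a generic illustration under that assumption; the SwitchAPI class and VMEvent structure are hypothetical placeholders, not any vendor's actual interface.

```python
# Generic sketch of automated network provisioning for a new VM.
# SwitchAPI and VMEvent are hypothetical placeholders, not a real vendor API.

from dataclasses import dataclass

@dataclass
class VMEvent:
    name: str
    mac: str
    vlan: int
    host_port: str     # switch port the VM's host is attached to

class SwitchAPI:
    """Stand-in for a switch's management interface."""
    def allow_vlan_on_port(self, port: str, vlan: int) -> None:
        print(f"switch: port {port} now trunks VLAN {vlan}")

    def pin_mac(self, port: str, mac: str, vlan: int) -> None:
        print(f"switch: MAC {mac} registered on port {port}, VLAN {vlan}")

def on_vm_created(event: VMEvent, switch: SwitchAPI) -> None:
    """Provision connectivity as soon as the hypervisor reports a new VM."""
    switch.allow_vlan_on_port(event.host_port, event.vlan)
    switch.pin_mac(event.host_port, event.mac, event.vlan)

# Example: the hypervisor fires an event and the network follows automatically.
on_vm_created(VMEvent("web-42", "52:54:00:ab:cd:42", vlan=110, host_port="Te0/12"),
              SwitchAPI())
```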

Cloud computing represents a return to centralised provisioning of applications and data. Data centres followed this model in the 60s and 70s with mainframe and minicomputers, but the rise of the personal computer led to the client/server computing model in the 80s and 90s.

Now, the trend is toward consolidating all application and data resources in a cloud, which simplifies the overall infrastructure and makes it easier to maintain applications. Cloud computing requires extremely high performance in the data centre network, servers, and storage to minimise latency in serving client needs.

What are the factors that CTO/CIOs mainly consider when making data centre decisions?

Decision-makers first consider performance and functionality when choosing networking technology. The technology must be based on open standards, and it must deliver the highest levels of performance with simplified management. Specific criteria for data centre switches include latency, backplane capacity, automation capabilities, protocol support, port density, and power consumption.

Here are key factors to consider:

Flat Layer 2

With the advent of virtualisation and the need for virtual machine mobility, Layer 2 connectivity is now expected across a larger set of servers than in the past. At the same time, the link-utilisation inefficiencies of the spanning tree protocol are no longer acceptable in the data centre. TRILL (Transparent Interconnection of Lots of Links) and SPB (Shortest Path Bridging) are emerging standards designed to meet these flat Layer 2 and efficient link-utilisation requirements.
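
The link-utilisation point can be illustrated with a toy topology: spanning tree must block redundant links to remain loop-free, while TRILL/SPB-style shortest-path forwarding can keep every link active. A minimal sketch, with an assumed four-switch full mesh:

```python
# Toy illustration of why spanning tree wastes links: in a full mesh of four
# switches, STP keeps only a loop-free tree (n-1 links) active, while
# shortest-path bridging (TRILL/SPB) can forward over every link.

from itertools import combinations

switches = ["A", "B", "C", "D"]
links = list(combinations(switches, 2))     # full mesh: 6 links

stp_active = len(switches) - 1              # a spanning tree uses n-1 links
multipath_active = len(links)               # TRILL/SPB can use all of them

print(f"Total links:            {len(links)}")
print(f"Active under STP:       {stp_active} "
      f"({stp_active / len(links):.0%} utilised)")
print(f"Active under TRILL/SPB: {multipath_active} (100% utilised)")
```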

Power efficiencies

Network equipment is adopting power-saving functions similar to those found in laptops. For example, equipment can turn off inactive interfaces or, at a minimum, reduce power consumption through dynamic power management techniques and through standards such as EEE (Energy Efficient Ethernet) and beyond. Reduced power consumption improves the total cost of ownership.
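
As a back-of-the-envelope illustration of why this matters for total cost of ownership, the sketch below compares an always-on switch with one that powers down idle ports; all wattages and prices are assumptions, not vendor figures.

```python
# Hypothetical estimate of the saving from powering down idle ports.
# Every figure below is an illustrative assumption, not measured vendor data.

PORTS = 48
ACTIVE_PORTS = 30
WATTS_PER_ACTIVE_PORT = 3.0     # assumed
WATTS_PER_IDLE_PORT = 0.3       # assumed low-power/EEE idle state
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10            # assumed

always_on = PORTS * WATTS_PER_ACTIVE_PORT
managed = (ACTIVE_PORTS * WATTS_PER_ACTIVE_PORT
           + (PORTS - ACTIVE_PORTS) * WATTS_PER_IDLE_PORT)

saving_kwh = (always_on - managed) * HOURS_PER_YEAR / 1000
print(f"Per switch: {always_on - managed:.0f} W saved, "
      f"roughly ${saving_kwh * PRICE_PER_KWH:.0f} per year")
```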

Network automation

Over the past couple of years, as cloud computing has developed, most of the focus has been on how servers and storage devices support it. Today, CTOs/CIOs are learning how critical it is for the network to be part of the cloud computing infrastructure as well. One of the biggest issues they are confronting in virtualised data centre environments is the added complexity.

In the past, CTOs/CIOs were dealing with server sprawl. Although virtualisation technology has helped data centre customers address the server sprawl issue, they are now experiencing similar problems with VM (virtual machine) sprawl. This proliferation of VMs is adding significant complexity to managing data centres.

Consequently, customers need to look for ways to mitigate this complexity. The best way to do that is with automation. By automating common functions, CTOs/CIOs can manage the additional complexity caused by virtualisation. For example, Force10's Open Automation Framework provides mechanisms to automate network functions, dramatically reducing the complexity of managing virtualised data centres.

Convergence

Another major trend CTOs/CIOs should consider in data centre networking is fabric convergence. IT managers want to eliminate separate networks for storage and servers. With fabric convergence they can reduce management overhead and save on equipment, cabling, space and power.

Three interrelated technologies that enable convergence are Fibre Channel over Ethernet (FCoE), Ethernet itself (which is being enhanced with Data Centre Bridging, or DCB), and 40/100 Gigabit Ethernet.

FCoE

Storage administrators initially gravitated to Fibre Channel as a storage networking protocol because it is inherently lossless; storage traffic cannot tolerate any loss in transmission. FCoE encapsulates Fibre Channel traffic in Ethernet and allows administrators to run storage and server traffic on the same converged Ethernet fabric. It also lets network planners retain their existing Fibre Channel controllers and storage devices while migrating to a converged Ethernet network for transport, eliminating the need to maintain two entirely separate networks.
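
The core idea is simply nesting a complete Fibre Channel frame inside an Ethernet frame identified by the FCoE EtherType (0x8906). A deliberately simplified sketch of that encapsulation, which omits the real FCoE header fields (version, SOF/EOF delimiters) and checksums:

```python
# Simplified view of FCoE encapsulation: a complete Fibre Channel frame is
# carried as the payload of an Ethernet frame with EtherType 0x8906.
# Real FCoE adds a version field and SOF/EOF delimiters, omitted here.

import struct

FCOE_ETHERTYPE = 0x8906

def mac(addr: str) -> bytes:
    return bytes(int(b, 16) for b in addr.split(":"))

def encapsulate(fc_frame: bytes, dst_mac: str, src_mac: str) -> bytes:
    """Wrap an opaque Fibre Channel frame in an Ethernet header."""
    eth_header = mac(dst_mac) + mac(src_mac) + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

fc_frame = b"\x00" * 64                      # placeholder FC frame contents
wire = encapsulate(fc_frame, "0e:fc:00:01:00:01", "0e:fc:00:02:00:01")
print(f"{len(wire)} bytes on the wire, EtherType 0x{FCOE_ETHERTYPE:04x}")
```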

DCB

DCB comes into the picture because it enhances Ethernet to make it a lossless protocol, which is required for it to carry FCoE. A combination of FCoE and DCB standards will have to be implemented both in converged NICs and in data centre switch ASICs before FCoE is ready to serve as a fully functional standards-based extension and migration path for Fibre Channel SANs in high performance data centres.
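
The lossless behaviour DCB adds can be pictured as priority-based flow control: when a switch queue carrying storage traffic nears capacity, the switch pauses the sender rather than dropping frames. A toy model of that idea, with arbitrary queue depths and thresholds:

```python
# Toy model of priority flow control (part of DCB): when the storage-priority
# queue on a switch port nears capacity, the switch pauses the sender instead
# of dropping frames. Thresholds and queue sizes are arbitrary.

PAUSE_THRESHOLD = 8      # depth at which a PAUSE is signalled
RESUME_THRESHOLD = 4     # depth at which the sender may resume

queue = []
sender_paused = False

def receive_frame(frame):
    """Enqueue a frame and signal PAUSE before the queue can overflow."""
    global sender_paused
    queue.append(frame)
    if len(queue) >= PAUSE_THRESHOLD and not sender_paused:
        sender_paused = True
        print(f"depth {len(queue)}: PAUSE sent, sender stops transmitting")

def forward_frames(n):
    """Drain the queue and tell the sender when it is safe to resume."""
    global sender_paused
    del queue[:n]
    if sender_paused and len(queue) <= RESUME_THRESHOLD:
        sender_paused = False
        print(f"depth {len(queue)}: RESUME sent, no frame was dropped")

# Sender transmits until paused, the switch drains, the sender resumes: zero loss.
frame = 0
while not sender_paused:
    receive_frame(frame)
    frame += 1
forward_frames(5)
```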

40/100 Gigabit Ethernet

Another advancement being driven by the rise in server and storage traffic on the converged network is the move to 40 Gigabit and 100 Gigabit Ethernet. With on-board 10GbE ports expected to be available on servers in the near future, top-of-rack (ToR) switches need 40GbE uplinks or they may become network bottlenecks.
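
The bottleneck arithmetic is straightforward. For a hypothetical top-of-rack switch with 48 server-facing 10GbE ports:

```python
# Hypothetical oversubscription check for a top-of-rack switch with 48 x 10GbE
# server-facing ports. Port counts are illustrative, not a specific product.

server_ports, server_speed_gbps = 48, 10
downlink_capacity = server_ports * server_speed_gbps       # 480 Gbps

for uplinks, uplink_speed in [(4, 10), (4, 40)]:
    uplink_capacity = uplinks * uplink_speed
    ratio = downlink_capacity / uplink_capacity
    print(f"{uplinks} x {uplink_speed}GbE uplinks: "
          f"{ratio:.0f}:1 oversubscription")
# 4 x 10GbE uplinks leave a 12:1 bottleneck; 4 x 40GbE uplinks bring it to 3:1.
```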

Would you like to discuss some of the significant successes achieved by your data centre customers?

Network Redux, a hosting provider in Portland, Oregon, is using Force10 S-Series top-of-rack switches to scale up Layer 2/Layer 3 capacity in its data centre. Force10 switches support protocols such as OSPF and VRRP to simplify data centre management and optimise networking performance.

Terenine, a cloud-based data centre provider in Chattanooga, Tennessee, is using Force10 E-Series switches to deliver up to 3.5 Tbps of backplane capacity for its fully virtualised server environment.

ReadySpace, a hosting provider in Singapore, is using Force10 switches to implement a dynamic, automated network that can quickly deliver capacity as needed for customer requirements.

Looking forward, where do you see data centre technology heading?

We see data centre technology continuing to evolve to support cloud computing initiatives. This means that data centres will continue down the paths of virtualisation and convergence, and that network automation will become critical. New automated switches eliminate the need to manually migrate port profiles and VMs, for example. Data centres will continue to become more dynamic, with applications and resources changing all the time. This requires dynamic network infrastructure to eliminate the bottlenecks of manual provisioning.
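
As a conceptual sketch of the network following the VM, the example below moves a port profile when a VM migrates between hosts; the PortProfile structure and switch calls are hypothetical placeholders, not a specific vendor's automation framework.

```python
# Conceptual sketch of a port profile following a VM as it migrates between
# hosts. PortProfile and the switch calls are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acl: str

profiles = {"web-42": PortProfile(vlan=110, qos_class="gold", acl="web-only")}

def apply_profile(port: str, profile: PortProfile) -> None:
    print(f"{port}: VLAN {profile.vlan}, QoS {profile.qos_class}, ACL {profile.acl}")

def remove_profile(port: str) -> None:
    print(f"{port}: profile removed")

def on_vm_migrated(vm: str, old_port: str, new_port: str) -> None:
    """Move the VM's network profile automatically, with no manual reconfiguration."""
    profile = profiles[vm]
    apply_profile(new_port, profile)
    remove_profile(old_port)

on_vm_migrated("web-42", old_port="Te0/12", new_port="Te1/5")
```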
