Green IT needs new performance metrics to back up its commitment to energy efficiency. In server terms, performance per watt is an increasingly important statistic for data centers that are maxing out their available electricity. Ideally, operators would like to measure it in real time, alongside the IT management and facility management platforms that keep the data center humming, so they can actually control IT assets with an eye toward trimming power bills, reacting to power emergencies or meeting green commitments.
Intel on Tuesday launched its latest generation of green server chips, the Xeon E5-2600 processor family, geared to meet all of those needs. The new processors are 50 percent more energy-efficient than the previous Xeon line, Intel says, and connect to Intel’s Data Center Manager, which feeds their real-time power and thermal data into existing system management consoles.
Intel is actually on the defensive in the server performance-per-watt race. Last week, rival processor maker Advanced Micro Devices (AMD) announced it was buying SeaMicro, maker of a new class of power-sipping servers, for $334 million. With a customized chip that can pack 256 servers into the space of a car trunk, SeaMicro is breaking the mold that has allowed Intel to dominate the server market for so long.
Intel custom-built a chip for SeaMicro last year, but with the startup now in AMD’s hands, that relationship seems unlikely to continue. After the SeaMicro acquisition was announced, Intel said it was working on technology to boost the power performance of its own chips. Meanwhile, HP and Dell have low-power, small-footprint microserver offerings of their own.
At the same time, a host of new competitors has entered the server space with processors based on ARM’s low-power chip designs. ARM dominates the smartphone market, but its licensees have been slower to take the technology to servers, though Nvidia turned to ARM last year for its latest generation of high-performance graphics processors. ARM says it expects to see its technology in servers on the market by 2014.
The Wall Street Journal reported Friday that Calxeda (formerly Smooth-Stone), another super-efficient processor startup using ARM technology, may have been a target for AMD as well. GigaOm, which broke the SeaMicro acquisition story last week, speculates that an AMD-ARM licensing agreement might be coming.
It should be noted that new IT products tend to be more efficient with every generation -- that’s the nature of Moore’s law and the industry’s broader march of progress. But data centers need to know how much power their servers are drawing under real-world computing loads -- a watt-per-compute ratio, so to speak -- before they can start managing that metric more effectively. That applies whether they’re running the latest green server fleet or, more likely, a mishmash of old and new equipment.
Intel has its own data center management platform, of course. So do HP, IBM, Cisco and many others. On the facilities side, giants like Trane, Siemens, ABB, General Electric and Schneider Electric build complex networks and platforms for managing chillers, fans, power delivery systems and the other gear that keeps the building humming.
But getting these systems in sync with real-time data center power use is trickier than it sounds. Connecting them to real-time power meters or sensors is one step. Another is calculating power-per-performance at the server itself, though servers have traditionally skimped on accurate power metering, since it adds pennies of cost to deliver data that customers didn’t really want -- until now.
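To make that watt-per-compute idea concrete, here’s a minimal sketch -- not any vendor’s actual API -- of how a monitoring script might derive a performance-per-watt figure by pairing sampled power readings with a throughput counter. The read_power_watts and read_completed_ops functions are hypothetical stand-ins for whatever meter, PDU or management interface a given data center actually exposes.

```python
import random
import time

# Hypothetical stand-ins for real data sources: swap in a PDU, rack meter,
# onboard sensor or management API as available.
_ops_counter = 0

def read_power_watts():
    # Simulated instantaneous power draw for one server, in watts.
    return 350.0 + random.uniform(-20.0, 20.0)

def read_completed_ops():
    # Simulated cumulative throughput counter (requests, jobs, queries served).
    global _ops_counter
    _ops_counter += random.randint(900, 1100)
    return _ops_counter

def performance_per_watt(window_s=60.0, samples=6):
    """Average throughput divided by average power over a short window."""
    ops_start = read_completed_ops()
    power_readings = []
    for _ in range(samples):
        power_readings.append(read_power_watts())
        time.sleep(window_s / samples)
    ops_done = read_completed_ops() - ops_start
    avg_power = sum(power_readings) / len(power_readings)
    return (ops_done / window_s) / avg_power  # operations per second per watt

if __name__ == "__main__":
    print(f"{performance_per_watt(window_s=6.0):.2f} ops per second per watt")
```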
That’s led to a slew of startups targeting the nexus of data center servers, facilities and energy, including Power Assure, Sentilla, Vigilent and JouleX. (Another, Viridity Software, was bought by Schneider Electric in December.) But behind all the ideas about how to manage servers for efficiency lies a fundamental principle: you’ve got to do it without interrupting the thing that really matters, which is the work those servers are doing.
By the way, how is server efficiency measured in the first place? Intel’s claims for the new line are based on a benchmark called SPECpower_ssj2008, an unwieldy title for the latest iteration of the server power measurement standard from the Standard Performance Evaluation Corporation (SPEC).
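For a rough sense of how that benchmark boils a server down to a single number: SPECpower_ssj2008 runs a Java transaction workload at graduated load levels (from 100 percent down to 10 percent, plus active idle) and reports an overall ssj_ops-per-watt score, which is essentially total throughput across the load levels divided by total average power across all the measurement intervals, idle included. Here is a simplified sketch with invented numbers and fewer load levels than the real benchmark uses:

```python
# Simplified illustration of SPECpower_ssj2008's headline number: total
# throughput (ssj_ops) across the target load levels divided by total average
# power across all measurement intervals, active idle included. The real
# benchmark uses ten load levels; the figures below are invented.
measurements = [
    # (load level, ssj_ops, average watts)
    ("100%", 1_000_000, 300.0),
    ("70%",    700_000, 240.0),
    ("40%",    400_000, 180.0),
    ("10%",    100_000, 130.0),
    ("active idle",  0,  95.0),
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
print(f"overall ssj_ops/watt: {total_ops / total_watts:,.0f}")
```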
Unlike a raw, data-center-wide ratio such as power usage effectiveness (PUE), per-server metrics need to be more nuanced. Power Assure, for instance, has come out with a rating called PAR4 that measures power use across common server functions and over time.
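PUE itself, by contrast, is about as simple as a metric gets: total facility power divided by the power delivered to the IT equipment, so a facility drawing 1.5 megawatts to run 1 megawatt of IT gear scores a 1.5. (PAR4’s per-server methodology is Power Assure’s own and isn’t reproduced here.) In code, it’s a one-liner:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power.
    1.0 is the theoretical ideal; most data centers run well above it."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1500.0, it_equipment_kw=1000.0))  # -> 1.5
```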
The EPA also has an Energy Star rating for servers, but it relies on published server specs rather than real-time energy consumption data. The agency is promising to change that in version 2.0 of the specification, however, and is turning to SPEC’s Server Efficiency Rating Tool (SERT) to do it.
Underlying all this work, of course, is the expectation of a massive buildout of cloud computing infrastructure to serve the promised “internet of things,” which Intel predicts could mean 15 billion connected devices for 3 billion connected users by 2015. That’s a 33-percent annual growth rate, and it will add up to 4.8 zettabytes of data -- a zettabyte is 10 to the 21st power bytes, roughly 2 to the 70th power, or about one thousand exabytes or a billion terabytes -- per year by mid-decade.
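For scale, the unit arithmetic behind that zettabyte figure works out as follows:

```python
ZETTABYTE = 10**21  # bytes; 2**70 bytes (a zebibyte) is roughly the same scale
EXABYTE = 10**18
TERABYTE = 10**12

annual_traffic_bytes = 4.8 * ZETTABYTE
print(annual_traffic_bytes / EXABYTE)   # 4800.0 exabytes
print(annual_traffic_bytes / TERABYTE)  # 4800000000.0 -- billions of terabytes
```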