The best way to keep your data center cool is to use the cool, ambient air of your natural surroundings.
But if your company doesn’t have the luxury of siting its data center somewhere temperate, as Facebook did in Oregon or others have near the fjords of Norway, the cost of cooling one can be significant.
Data center efficiency has become big business in recent years, with startups like Power Assure entering the field alongside power engineering giants IBM, General Electric and HP. Now there is one more company in the mix: SM Group International (SMi), an engineering firm based in Montreal that is tackling cooling with claimed energy savings of up to 50 percent.
SMi’s patent-pending technology isn’t new hardware at all, but rather a shift in how to go about removing heat from the servers. “We said, let’s try to remove equipment and simplify the equation,” said Jean-Simon Venne, head of the company’s energy efficiency division.
The key to the system is to pressurize the space and blow air directly over the servers before heat builds up. It also uses traditional strategies, like precooling with outside air. But in warm climates, like Miami or Singapore, precooling can only cut energy use about 10 percent of the time, said Venne. Data centers account for about 2 percent of electricity use in the U.S., and that figure is growing.
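To make that 10 percent figure concrete: precooling with outside air only helps during hours when the ambient temperature is below the supply-air setpoint, and in a hot climate such hours are rare. Here is a minimal sketch of that bookkeeping; the setpoint and the synthetic hourly temperatures are assumptions for illustration, not SMi’s data.

```python
import random

# Assumed target supply-air temperature; outside air can precool the
# intake only during hours when it is colder than this setpoint.
SUPPLY_SETPOINT_C = 18.0

def free_cooling_fraction(hourly_temps_c: list[float]) -> float:
    """Fraction of the year when ambient air is below the setpoint."""
    usable = sum(1 for t in hourly_temps_c if t < SUPPLY_SETPOINT_C)
    return usable / len(hourly_temps_c)

# Synthetic hourly temperatures for a hot climate (one year = 8,760
# hours); the mean and spread are made up so the result lands near
# the ~10 percent Venne cites for places like Miami or Singapore.
random.seed(0)
hot_climate = [random.gauss(24.5, 5.0) for _ in range(8760)]

print(f"Usable precooling hours: {free_cooling_fraction(hot_climate):.0%}")
```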
After SMi focused on optimizing precooling, the engineers asked themselves, “Why don’t we just put the computer in a wind tunnel instead of pushing cold air into a square room?”
Moving air efficiently through a pressurized environment cut energy use by up to 50 percent. The even bigger savings, though, may be on the capital side: the configuration of SMi’s data centers can cut capital costs by up to 40 percent. And because SMi has trimmed the overall equipment needed in the data center, Venne said construction time can be cut by up to 25 percent.
Venne compared the pressure to something between sea level and an airplane cabin. “Think of how dry an airplane is,” he said. “It’s that simple.”
The average data center has a power usage effectiveness (PUE) of about 2 to 3; PUE is the ratio of total facility power to the power drawn by the IT equipment alone, so 1.0 is the ideal. Facebook’s latest Oregon data center sports a PUE of 1.07 and consumes 38 percent less power than average, thanks to a new server design and DC power. GE’s ultra-efficient data center scores 1.63.
SMi said its system can cut the PUE to as low as 1.1 for a new facility. Retrofit results would vary, but a data center with a PUE of 2 could probably be brought down to about 1.3.
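A back-of-the-envelope calculation shows what that retrofit claim implies: at a constant IT load, total facility power is simply PUE times IT power, so dropping from 2.0 to 1.3 trims the total draw by 35 percent. The IT load below is a hypothetical figure, not one from SMi.

```python
def total_facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power = PUE x IT equipment power (both in kW)."""
    return it_load_kw * pue

# Hypothetical 1,000 kW IT load; the PUE figures are the retrofit
# numbers Venne cites (2.0 before, roughly 1.3 after).
it_load = 1_000.0
before = total_facility_power_kw(it_load, pue=2.0)  # 2,000 kW
after = total_facility_power_kw(it_load, pue=1.3)   # 1,300 kW

savings = (before - after) / before
print(f"{before:.0f} kW -> {after:.0f} kW ({savings:.0%} less total power)")
# 2000 kW -> 1300 kW (35% less total power)
```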
Venne admitted he was surprised the design could be patented, since it rests on basic thermodynamics. But with so many other companies focused on the power source (AC vs. DC) or on optimizing how the servers run, no one had taken this approach.
The company is currently working with six universities, although Venne said that the number of data center projects coming across his desk is “stunning.”
Cooling, of course, is just one aspect of data center power. Other ideas for curbing power have included weather mapping inside data centers (Sentilla, SynapSense), application shifting (Power Assure), improved AC controls (Vigilent), switching from AC to DC power (ABB, GE, Nextek), swapping disks for flash memory (SandForce), better power conversion (Transphorm), and smaller, more energy efficient servers and chips (SeaMicro, Calxeda).
SMi’s technology doesn’t have to stand alone, either: a data center that converts to DC power and uses the cooling system would see even greater savings. And because of the lower capital expense and quicker construction time, Venne said his company’s technology can help commoditize data centers. “It’s easy to do,” he said, “and it doesn’t cost that much.”