It’s often said that engineering is the compromise between dreams and reality. Indeed, the process of balancing the competing factors of high computing performance, low power consumption, cool operation, and light weight is a perfect example of the engineering tradeoffs that must be considered with every design.
Not long ago, low-power computing automatically meant low-performance designs. That is no longer true. The mobile-driven industry focus on power efficiency and form-factor optimization is producing results that upset this long-held presumption.
In the process, some of the old engineering compromises are being significantly altered, ushering in a new era of energy-efficient computing. Recent developments point to exciting improvements in energy efficiency, arriving just in time to meet the immense energy demands expected from the explosion in mobile devices and the developing “internet of things.”
What we’re witnessing is a reaction to expanding power demand and overburdened energy supply. Our computing future is pitting the enormous growth of devices and information against rising energy costs and an already overtaxed energy infrastructure. A substantial part of the solution consists of increasing the energy efficiency of our computing devices at a rate similar to the impressive microprocessor performance gains achieved during the past three decades of PC computing.
Key efficiency factors: Integration, innovation, power management
Today’s mobile technologies utilize breakthroughs that squeeze more powerful performance into ever-smaller devices, boost battery life, reduce weight and heat, and virtually eliminate the need for noisy cooling fans. Many power efficiency gains are a result of rejecting the old PC-centric power management model of "the bigger, the better" in favor of a new, holistic approach that integrates power management functions across the entire system. This new approach has given rise to advances in system integration, processing innovation and idle power management.
System integration approaches include heterogeneous system architectures (HSAs) that combine CPUs and GPUs on the same piece of silicon in what are often called accelerated processing units, or APUs. This reduces typical-use energy consumption by eliminating power-robbing interfaces between chips, and it enables on-chip management tools to efficiently allocate and reduce power among the integrated components.
Devices built on HSA boost processing efficiency by routing each workload to the processor best suited for the job, improving performance for standard office applications as well as emerging “visual computing” and natural user interface applications. In short, HSA devices are designed to deliver improved performance capabilities while using less power.
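The routing idea can be illustrated with a minimal sketch. The workload categories, the dispatch rule, and the run_on_cpu/run_on_gpu helpers below are hypothetical stand-ins for illustration only, not AMD’s actual HSA runtime or scheduler.

```python
# Minimal sketch of HSA-style workload routing. The categories and the
# helper functions are invented for illustration; a real HSA runtime
# makes this decision with far more information about the hardware.

def run_on_cpu(task):
    # Placeholder for latency-sensitive, branchy, largely serial work.
    return f"CPU handled {task}"

def run_on_gpu(task):
    # Placeholder for data-parallel "visual computing" style work.
    return f"GPU handled {task}"

# Hypothetical dispatch rule: data-parallel workloads go to the GPU,
# everything else stays on the CPU cores.
DATA_PARALLEL = {"image_filter", "video_decode", "gesture_recognition"}

def dispatch(task):
    return run_on_gpu(task) if task in DATA_PARALLEL else run_on_cpu(task)

if __name__ == "__main__":
    for task in ("spreadsheet_recalc", "image_filter"):
        print(dispatch(task))
```

The design point this sketch tries to capture is simply that the decision is made per workload, so the same device can favor the CPU for office tasks and the GPU for parallel visual tasks without the user doing anything.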
Another new and promising area of power management centers on low-power, or idle, states. Idle-state management is ideally suited to mobile devices because they require high performance for short periods, followed by relatively long idle intervals while the user awaits a response or observes a result. The key is to architect very low-power idle states, allow the device to enter them as quickly as possible, and keep the chip configuration flexible enough that a low-power state is always available on short notice.
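One way to picture this is a simple idle-state selection policy that picks the lowest-power state whose entry-and-exit cost fits the expected idle interval. The states, power figures, and latencies below are invented for illustration; real platforms expose different tables and far more sophisticated governors.

```python
# Sketch of an idle-state selection policy in the spirit of the approach
# described above. All numbers are invented for illustration.

IDLE_STATES = [
    # (name, power_watts, entry_plus_exit_latency_us)
    ("shallow", 1.00, 5),
    ("medium",  0.30, 100),
    ("deep",    0.05, 1500),
]

def pick_idle_state(expected_idle_us):
    """Choose the lowest-power state whose transition cost fits the
    expected idle interval; fall back to the shallowest state."""
    best = IDLE_STATES[0]
    for name, power, latency in IDLE_STATES:
        if latency <= expected_idle_us and power < best[1]:
            best = (name, power, latency)
    return best

if __name__ == "__main__":
    for idle_us in (20, 500, 50_000):
        print(idle_us, "->", pick_idle_state(idle_us))
```

The shorter the transition latencies the hardware can offer, the more often the deep states can be used, which is why entering low-power states quickly matters as much as how low those states go.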
With these innovations, power efficiency is no longer the ratio of peak performance to peak power consumption (i.e., peak-use efficiency), but rather the ratio of peak performance to typical power consumption. Performance is delivered in short bursts to give the user the fast response desired, while typical power remains far below peak thanks to the aforementioned aggressive use of low-power states.
This measure of efficiency is termed “typical-use energy efficiency,” and it is improving at a much faster rate than peak-use efficiency due to both architecture and power management innovation.
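A rough worked example makes the difference concrete. All of the numbers below are invented for illustration; the point is only how a bursty workload and a low idle floor change the arithmetic.

```python
# Rough arithmetic comparing peak-use and typical-use efficiency.
# All numbers are invented for illustration only.

peak_performance_gflops = 100.0   # performance during a burst
peak_power_w = 15.0               # power drawn during that burst
idle_power_w = 0.3                # power in a low-power idle state
active_fraction = 0.05            # bursty workload: active 5% of the time

# Peak-use efficiency: peak performance over peak power.
peak_use_efficiency = peak_performance_gflops / peak_power_w

# Typical power: time-weighted average of burst power and idle power.
typical_power_w = (active_fraction * peak_power_w
                   + (1 - active_fraction) * idle_power_w)

# Typical-use efficiency: peak performance over typical power.
typical_use_efficiency = peak_performance_gflops / typical_power_w

print(f"peak-use:    {peak_use_efficiency:.1f} GFLOPS/W")
print(f"typical-use: {typical_use_efficiency:.1f} GFLOPS/W")
```

With these made-up figures, typical-use efficiency comes out more than an order of magnitude higher than peak-use efficiency, which is why lowering idle power and duty cycle pays off so much more than shaving peak power alone.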
Designing for performance-per-watt efficiency
Computing performance is commonly assessed using industry-standard measures such as FLOPS, MIPS, and other popular benchmark scores. Measuring the performance piece of the power-efficiency equation can be more difficult, however, because it is a moving target, changing continuously as the system manages different workloads.
Performance-per-watt measures the computation delivered by a device for every watt of power consumed. As discussed, advanced power management enables efficiency to be determined more by the watts consumed in the low power states than by the watts consumed during brief peak computation periods. This measure is now commonly used to assess the efficiency of multipurpose platforms, including mobile phones, tablets, laptops and embedded devices.
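In practice, performance-per-watt is typically derived from a benchmark score and the power measured over the whole run, idle gaps included. The sketch below shows that calculation with invented sample values; it assumes a uniform power-sampling interval and an arbitrary benchmark scoring unit.

```python
# Sketch of computing performance-per-watt from a benchmark run, given a
# score and periodic power-meter samples. All values are invented.

benchmark_score = 4200.0          # arbitrary benchmark scoring units
sample_interval_s = 0.5           # power meter sampling period
power_samples_w = [12.0, 11.5, 3.0, 0.4, 0.4, 9.8, 0.5, 0.4]  # watts

# Energy is the integral of power over time; with uniform sampling it is
# the sum of the samples times the sampling interval.
energy_j = sum(power_samples_w) * sample_interval_s
average_power_w = energy_j / (len(power_samples_w) * sample_interval_s)

perf_per_watt = benchmark_score / average_power_w
print(f"average power: {average_power_w:.2f} W")
print(f"performance-per-watt: {perf_per_watt:.1f} score units per watt")
```

Because the low-power samples between bursts pull the average down, the idle-state behavior discussed above shows up directly in this figure of merit.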
The challenge is reversing the performance-only mindset of the previous three decades of computing, as well as designing the next generation of devices for optimal performance-per-watt efficiency. A related challenge is identifying the most representative performance benchmark to reflect the typical energy usage of today’s mobile devices.
High performance coupled with high power consumption is no longer an option for many devices. This is especially relevant for the power requirements of the billions of new computing devices expected to make up the internet of things. Likewise, users will not accept devices that deliver low power consumption but only low performance, which presents a considerable challenge for designers.
The global power-efficiency imperative
The processors used in many of today’s computing devices are roughly ten times more power efficient than those of only six years ago, and by the year 2020, typical-use energy efficiency for the best of them is expected to increase by a factor of 25. This is good news for computer users, the computer industry and the planet.
With billions of new connected devices expected by the end of the decade, each drawing its share of power from a strained global energy grid, low-power designs will be a top priority for years to come. Fortunately, advances with APU microprocessors, SoC integration, intelligent power management and compute performance delivered by next-generation heterogeneous computing all help to address the global energy-efficiency imperative.
Our computing future need not be a compromise. Innovative designers are developing devices that achieve both high performance and low power consumption. There is a bright future for innovations that advance technology performance while improving energy efficiency.
***
Sasa Marinkovic is the head of technology solutions for AMD, a manufacturer of microprocessors and graphics processors.