How about the future? Have we reached the pinnacle of power management?
Hardware and software are still evolving to be even more energy efficient. One example is the “tickless” OS. Historically, OSs woke the processor via a timer interrupt around a hundred times a second to check whether anything needed to be done, such as task switching or handling incoming data from some device. OSs haven’t actually needed this periodic polling for decades, yet the legacy “tick” remained part of every OS until the last few years. Every wake-up forced the processor into an active state, which can prevent it from dropping into the lowest-power C-states. The impact is that energy was unnecessarily wasted on a requirement that no longer exists. Thankfully, most common OSs are now tickless to one extent or another.
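The difference between the two wake-up strategies can be sketched in a few lines of user-space Python (illustrative only; the function names are my own, and real kernels implement this with timer interrupts, not queues):

```python
import queue
import threading
import time

def tick_style_wait(work_queue, timeout):
    """Legacy 'tick' style: wake up periodically to poll for work.
    Every wake-up forces the CPU out of its idle state, even when
    there is nothing to do. Returns (item, wakeup_count)."""
    deadline = time.monotonic() + timeout
    wakeups = 0
    while time.monotonic() < deadline:
        wakeups += 1
        try:
            return work_queue.get_nowait(), wakeups
        except queue.Empty:
            time.sleep(0.01)  # the 10 ms "tick"
    return None, wakeups

def tickless_wait(work_queue, timeout):
    """Tickless style: block until work actually arrives (one wake-up).
    While blocked, the OS is free to leave the CPU in a deep C-state."""
    try:
        return work_queue.get(timeout=timeout)
    except queue.Empty:
        return None

if __name__ == "__main__":
    q = queue.Queue()
    threading.Timer(0.2, q.put, args=("work",)).start()
    item, wakeups = tick_style_wait(q, 1.0)
    print(f"tick style: got {item!r} after {wakeups} wake-ups")

    q = queue.Queue()
    threading.Timer(0.2, q.put, args=("work",)).start()
    print(f"tickless: got {tickless_wait(q, 1.0)!r} after one wake-up")
```

Both callers receive the work item, but the tick-style version wakes roughly twenty times waiting for it while the tickless version wakes exactly once, which is the whole point of removing the legacy tick.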
As devices and application domains evolve, the pressure to conserve even more energy is very strong, not only for mobile devices but also for huge data centers. Mobile devices have the effrontery to get smaller and smaller; data centers need to service more and more people with more and more data; applications keep putting greater demands on processing power; and consumers demand longer and longer battery life.
These trends have resulted in a nearly 3000-fold increase in the performance / power ratio+ over the last 30 to 35 years++. And the evolution of power management hasn’t stopped. Given the strong driving forces of data centers and handheld devices, I can imagine that tomorrow’s power management will eke out even more savings as well as minimize some of the negative situations that can prevent the effective adoption of power management in certain corner cases, e.g., cases where OS jitter can’t be tolerated and precise periodic interrupts are needed.
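As a rough sanity check on that ballpark figure (my own arithmetic, not from the original sources): a ~3000-fold gain over roughly 32 years corresponds to the performance / power ratio doubling about every 2.8 years.

```python
import math

# Back-of-the-envelope check of the ~3000x performance/power claim.
# Both inputs are the text's ballpark numbers, not measured data.
factor = 3000
years = 32  # midpoint of the "30 to 35 years" in the text

doublings = math.log2(factor)        # ~11.6 doublings of the ratio
doubling_period = years / doublings  # ~2.8 years per doubling
print(f"{doublings:.1f} doublings, one every {doubling_period:.1f} years")
```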
Can you think of anything that the processor and software can do to save even more energy (using existing hardware)? Does the processor or OS do something that isn’t really necessary anymore? Does technology have a new, more power-efficient feature that existing software still doesn’t exploit? Are there power hotspots that should be looked at? Are there areas where the processor could save energy, but the cost trade-off (e.g., latency or reliability) is too great? Can the cost trade-off be mitigated, allowing the processor to save more energy? These are some of the questions that very creative architects and engineers are asking in their pursuit of improving the performance / power ratio even further.
+You cannot simply compare raw energy usage, as it is a moving target: process scales get smaller, silicon area gets bigger, new materials and gate technologies appear, etc.
++This estimate covers Intel® general-purpose processors only, starting with the 80286. It is a very rough ballpark figure obtained from general Internet sources.