to go for energy efficiency, as opposed to the more traditional approaches using high-power, high-performance CPUs.
Designed for a small number of threads or jobs done
very quickly, CPUs have a high energy overhead. From
NVIDIA’s perspective, important applications that
demand true high performance are parallel in nature
and are amenable to more throughput-oriented systems
designed to execute thousands or millions of tasks.
At NREL, architecture plays a key role in energy efficiency. In this case, the HPC system is using HP servers
based on Intel Xeon processors, including the 22 nm Ivy
Bridge architecture and Intel Xeon Phi coprocessors. Says
Stephen Wheat, general manager of HPC at Intel, “As
the optimized and tuned application is run in production,
the achieved performance per watt on both Xeon Phi and
Xeon processors has allowed achieving the results with the
lowest energy use.”
MADLY MOVING IN ALL DIRECTIONS
Of course, the push to achieve exascale computing
in the not-too-distant future is motivating a lot of very
smart people to devise radical new power and cooling
alternatives. The power budget of 50 megawatts for an
exascale system built on today’s technology platforms is
hardly acceptable. Unless, of course, you can afford your
own small nuclear power plant or build your data center
in locations where power is virtually unlimited — e.g.,
Niagara Falls, the Columbia River Gorge or the Bay of
Fundy. And then there’s Reykjavik, where you not only
have volcanoes spewing out geothermal power, but plenty
of ice to boot.
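To put that 50-megawatt figure in perspective, here is a rough back-of-the-envelope calculation. The sustained-exaFLOPS target and the electricity rate are illustrative assumptions for this sketch, not figures from the article:

```python
# Back-of-the-envelope look at the exascale power challenge.
# Assumption (illustrative): an "exascale" system sustains 10**18
# floating-point operations per second (1 exaFLOPS).

EXAFLOPS = 1e18        # operations per second
POWER_BUDGET_W = 50e6  # the 50-megawatt budget cited above

# Efficiency required to hit 1 exaFLOPS within the budget
flops_per_watt = EXAFLOPS / POWER_BUDGET_W
print(f"Required efficiency: {flops_per_watt / 1e9:.0f} GFLOPS/W")

# Annual electricity cost at an assumed $0.10 per kWh utility rate
hours_per_year = 24 * 365
cost_per_kwh = 0.10  # assumed rate, for illustration only
annual_cost_usd = (POWER_BUDGET_W / 1e3) * hours_per_year * cost_per_kwh
print(f"Annual power bill: ${annual_cost_usd / 1e6:.0f} million")
```

Even under these simple assumptions, the system would need roughly 20 GFLOPS per watt and an electricity bill in the tens of millions of dollars per year, which is why the power budget dominates exascale design discussions.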
But, according to Keckler, there is no single silver bullet that will allow us to efficiently power and cool today's
and tomorrow's HPC systems. Rather, it will be a collection of technological advances, some mundane, some quite
startling, including some Black Swans, those impossible-to-predict events that have major repercussions.
For example, full server oil immersion is making inroads
— Intel, among others, has been experimenting with the
technology for several years. Intel also has been looking
into near-threshold voltage circuit design.
Three-dimensional architectures include the FinFET, a
15-year-old technology, now being revitalized by semiconductor manufacturers, that allows you to make smaller,
more densely packed, energy-efficient transistors. In another
approach, called heterogeneous three-dimensional (3-D)
integration, multiple wafers, stacked vertically, have the
potential to consume less power and provide higher perfor-
mance than current two-dimensional chips.
Lawrence Berkeley National Laboratory, among others,
has been researching the use of carbon nanotubes for
microprocessor cooling applications.
Microsoft, Amazon, Google and others are deploying
highly energy-efficient modular computer containers to
support their cloud services efforts.
Superconducting circuits, chilled to a few degrees above
absolute zero, have the potential to be not only faster, but
far more energy efficient than conventional semiconductors.
The list goes on. With the emphasis on green computing
and the need to navigate the rocky road to exascale, some
of these solutions will take hold and evolve, others will
die on the vine and, most interestingly, mysterious Black
Swans are bound to arise.
Says Thomas Sterling, "It may be in the future that
we will see other completely alien forms of computing,
which may operate with a completely different relationship between effective computation and energy and power."
It should be an interesting decade, one in which power
and cooling, in all their myriad forms, take center stage in
the HPC data centers of the world.
John Kirkley, President, Kirkley Communications, is a
writer and editor specializing in HPC. He may be reached
The mechanical room that supports the NREL HPC data
center and ties its waste heat to the rest of the facility