Observing year one of the Intel Parallel Computing Centers
One year ago, recognizing a rapidly emerging challenge facing the HPC community, Intel launched the Parallel Computing Centers (IPCC) program. With the great majority of the world's technical HPC workloads running on systems based on Intel architecture, the company was keenly aware of the growing need to modernize a large portfolio of public domain scientific applications, preparing these critically important codes for multi-core and many-core environments.
The IPCC program was received by the HPC community with a strongly positive response. Now, at the one-year anniversary of this important program, we'll take a closer look at its widespread growth and at the success and impact it is already having throughout the global HPC ecosystem.
The IPCC program was designed to focus resources and expertise on modernizing some of the most widely used applications that have not been updated in recent years for parallel architectures, leaving them unable to leverage advances in massively parallel systems. Code that has not been updated hampers performance and inhibits discovery in such critically important areas as climate modeling, energy research, national security, genomics, health care research and new product design, to name a few.
While code modernization is not specific to any particular platform or architecture, the term is becoming somewhat synonymous with Intel, in large part due to the IPCC program, a global effort that now extends to 14 countries, with parallel optimization efforts currently underway on close to 70 public domain codes.
The IPCC program offers a range of support to software engineers, including funding, hardware grants, consulting and training. Intel typically helps pay for the work of computer scientists, often postdoctoral researchers and graduate students. A key advantage of the program: as public domain software is "modernized," the improvements go back to the community, so the benefits scale.
"The IPCC program is beginning to drive software improvements that the industry desperately needs," said Bob Burroughs, Intel's Director of Technical Computing Ecosystem Enablement. "Between now and the end of the decade, it's critically important that we optimize a significant number of these legacy codes to take advantage of massively parallel, many-core architectures. We're moving toward HPC systems having hundreds of thousands…"
The IPCC at Lawrence Berkeley National Laboratory is performing code modernization work on NWChem.