ing that they are less likely to result in unforeseen side
effects due to interactions with other proteins.
In the past, it took three weeks to model a protein.
Using HPC resources, the company managed to reduce
that time to 12 seconds. And, in fact, from an end
user’s perspective, it takes only three seconds from
asking a question about a specific protein structure to
receiving the answer from the databases — which is
life-changing!
HPC enabled Moleculomics to take the lead over
their competition through their ability to compute
structures for all variants of proteins that are present
in the population. They are now beginning to run algorithmic screening experiments that investigate how these variant structures interact with candidate drug compounds.
Thinkplay.tv and Moleculomics are just two examples among many. In today's world, high-performance computing has become the competitive edge that helps companies, big or small, increase their productivity, reduce their time to market, cut development costs, and compete with the best tools available.
Any small engineering and design company should consider using HPC as part of its computer-aided simulation activities. Today, most HPC systems are built from commodity servers, interconnect, and storage; in other words, a compute and storage cluster. These are not the complex proprietary systems of the past. One can easily build an HPC system on one's own, and guidelines on how to do so can be found, for example, on the HPC Advisory Council website: www.hpcadvisorycouncil.com.
For instance, by buying dual-socket servers from the server vendor of choice, adding the required amount of storage to hold files, and connecting it all with an InfiniBand interconnect, one can gain the power of a supercomputer. Many HPC sites provide access to their systems, and the HPC Advisory Council also provides access to various HPC systems so that users can test and experience the power of HPC first-hand. The council has also pre-tested many applications and provides guidelines on how best to use them to form an efficient HPC system.
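Once such a cluster is assembled, a quick way to confirm that the servers and the interconnect are working together is to run a small message-passing test across the nodes. The following sketch is a minimal example, assuming an MPI implementation such as Open MPI or MPICH is installed on the cluster; the file name and process count used here are only illustrative. Each process reports its rank and the node it is running on.

/*
 * Minimal MPI check for a commodity compute cluster.
 * Compile: mpicc hello_cluster.c -o hello_cluster
 * Run:     mpirun -np 16 ./hello_cluster
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */

    char node[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(node, &len);     /* name of the node we run on */

    printf("Rank %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}

Launching the program with a host file that lists all of the cluster's nodes exercises both the servers and the fabric that connects them; if every rank prints its node name, the basic cluster plumbing is in place.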
Using standards-based HPC systems is important for any business, whether it is new to HPC or experienced. Standards-based solutions ensure backward and forward compatibility, ensure support from a large ecosystem of vendors and users and, of course, ensure the ability to use many of the open-source or open-source-based software libraries and applications.
Infographic
What's Your Cooling Profile?

Cooling is a priority
More than 45% of survey respondents who answered the question were running at least 7 kW of equipment per rack.
80% of respondents agreed or strongly agreed that reducing cooling costs is one of their highest priorities.

People are relying on conventional cooling techniques
51% of respondents used air containment as part of their data center cooling strategy.
55% used CRAC or CRAH units.

Liquid cooling is underexplored
Less than 1% of respondents used immersive or direct-to-chip cooling.
9% were prepared to consider immersive cooling technologies.
Regulating temperatures in the modern data center
can be a challenging task. In this data center
cooling survey, Data Center Knowledge set out to learn how those responsible for managing data center environments coped with keeping them cool. It quizzed people with managerial roles in modern data centers to determine their current practices and attitudes toward data center cooling. The survey explored the tools and techniques they used and also looked at cutting-edge technologies to gauge their adoption.
View survey www.itwhitepapers.com/content37741