Vast amounts of data are now available for
training deep learning algorithms.
Supercomputing, in the form of
compute clusters consisting of
commodity-grade servers, is within the
reach of nearly every commercial and
academic organization. Performance
accelerators, in the form of GPUs
and other co-processors, are more
powerful and affordable than ever.
These factors, combined with the need
for automated ways to do things like
speech recognition, image recognition
and other tasks that are not well-suited
to traditional computational methods, are
driving the deep learning revolution.
As so often happens when interesting
ideas emerge, the software development
community has responded to the need
by introducing libraries and frameworks
designed to facilitate the development of
deep learning applications.
There are more than a dozen open
source frameworks for machine learning
today, each with its own advantages and
disadvantages. Some are better suited
to deep learning than others. Some, like
TensorFlow and Theano, are Python-based. Others, like Caffe, are written
mostly in lower-level languages such as
C++. Some are better suited to running
in the cloud, while others are designed
to run on local servers. But most have
one thing in common—they can be a
challenge to get up and running.
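Because most of these frameworks ship as Python packages, one quick way to see what is already usable on a given machine is to probe for them with the standard library. A minimal sketch; the framework names listed are illustrative examples, not an endorsement of any particular set:

```python
from importlib.util import find_spec

def available_frameworks(candidates=("tensorflow", "theano", "torch", "caffe")):
    """Return the subset of candidate frameworks that are importable here."""
    return [name for name in candidates if find_spec(name) is not None]

# Prints whichever of the candidates happen to be installed, e.g. [] on a bare machine.
print(available_frameworks())
```

A check like this only tells you a package is importable; it says nothing about whether its GPU support or native dependencies are configured correctly, which is where most of the real setup effort goes.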
Getting Started with Deep Learning
• Providing a Solid Infrastructure
Deep learning applications rely on large
amounts of quality training data and large
amounts of compute power. The most
practical way to house that data and apply
that compute power at scale is to build
a dedicated compute cluster.

Tremendous performance gains are
possible when you apply the power of
GPUs to your deep learning problems.
Having an infrastructure that can take
full advantage of GPUs is a plus.
• Choosing a Machine Learning Framework
With so many frameworks and libraries
to choose from, there is a lot to consider
when making a selection. Which is best
for your situation?
• Putting it All Together
Once you’ve chosen your
frameworks and libraries, one task
remains. You still need to find and
install all of the software dependencies
that are necessary to complete the
installation. Most of the open source
libraries rely on the availability of
supporting libraries from other projects.
Finding the right ones, and making sure
they’re compatible with the rest of your
cluster can be a challenge.
• Don’t Re-invent the Wheel
You could spend your time
designing and building a cluster from
scratch: downloading and installing
operating systems, middleware, deep
learning frameworks, and libraries, then
resolving and installing all of the software
dependencies and hardware APIs. I don’t
recommend it.
Rather than turning your data
scientists into IT administrators, you may
want to consider a deep learning solution
that is deployment-ready. Make sure the
solution you choose doesn’t limit you,
and is flexible enough to adapt in this
rapidly evolving field of deep learning.
Bright for Deep Learning is one such
solution. It combines the deep learning
software you need with all of the
supporting libraries and hardware APIs
into a solution that is ready to deploy. ●
“…many believed that training deep
architectures was too
difficult an optimization
problem.” — Yoshua Bengio
and Yann LeCun
The Deep Learning Revolution
Deep learning is the fastest-growing field in artificial intelligence, helping
computers make sense of vast amounts of data in the form of images,
sound, and text.