
Is High-Performance Computing For You?


Do you have big data? It’s likely. Enterprises create 2.5 quintillion bytes of data each day, and 90 percent of the data in the world was created in the last two years (www.ibm.com/software/data/bigdata). Determining the core value of all this data is a daunting task for enterprises, and it’s a major reason why data-retention policies, where they exist at all, remain conservative in their safekeeping practices. The result: we store more “stuff” longer, or even indefinitely.

“There is really no limit to this scalability.”
—Jim Herring, director of IBM HPC products

Business analytics is changing this—and so are an increasing number of intense data-crunching practices, such as complex portfolio analysis in the financial industry. Once performed almost exclusively by research laboratories and universities, these practices are now making their way into business. Taken together, big data trends are making a compelling case for the adoption of high-performance computing (HPC) in enterprise data centers.

The Case for HPC

HPC is the use of parallel processing to spread machine instructions across multiple processors so that advanced applications run efficiently, reliably and quickly. Typical applications include weather analytics and forecasting, scientific computations such as genome analysis, oil and gas exploration, financial portfolio analysis, medical and pharmaceutical research, and the breakdown of big data for use in business analytics.
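
To make “spreading machine instructions across multiple processors” concrete, here is a minimal sketch in Python. The article names no language or tooling, so everything below is illustrative: the same compute-bound job is run once sequentially and once split across all available cores with the standard library’s multiprocessing module, and the sum-of-squares kernel and chunk sizes are invented for the example.

```python
# Minimal illustration of the parallel idea behind HPC: one
# compute-intensive job run sequentially, then spread across processors.
# The workload (summing squares over integer ranges) is illustrative only.
import time
from multiprocessing import Pool, cpu_count

def crunch(bounds):
    """Stand-in for a compute-intensive kernel."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 20_000_000
    workers = cpu_count()
    step = n // workers
    # Split one big job into per-processor chunks.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    start = time.perf_counter()
    sequential = sum(crunch(c) for c in chunks)   # one processor at a time
    t_seq = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(workers) as pool:                   # all processors at once
        parallel = sum(pool.map(crunch, chunks))
    t_par = time.perf_counter() - start

    assert sequential == parallel
    print(f"sequential: {t_seq:.2f}s, parallel on {workers} cores: {t_par:.2f}s")
```

On a multicore machine the parallel run finishes in a fraction of the sequential time; HPC applies the same divide-and-combine pattern at the scale of whole clusters.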

Because HPC is unusually compute-intensive, and relational and parallel besides, it can’t be achieved through the sequential, transaction-centric computing that characterizes most data center servers. As enterprises recognize the need to incorporate HPC into their data center operations to support business analytics and, in some cases, business operations such as pharmaceutical development and modeling, they are also recognizing that they lack onsite HPC processing capability, as well as the statisticians and analysts with the specialization needed to develop algorithms for HPC processing and results.

The good news for enterprises is that lower-cost HPC options, well short of supercomputers, are available for their data centers, and that outside talent can help them incorporate HPC.

“We see HPC as a solution for enterprise problems that require an intense compute process accompanied by the capability to rapidly process data,” says Meike Chabowski, product marketing manager for SUSE Linux* Products GmbH. “This could take the shape of HPC clusters that run traditional HPC workloads and that focus on number crunching, or HPC processing that supports specific business applications such as portfolio management, risk management and enterprise data management.”
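
Chabowski’s business-facing examples, portfolio and risk management, tend to be embarrassingly parallel: millions of independent simulation trials. As a hedged sketch of that shape of workload, the snippet below estimates one-day value at risk by Monte Carlo, splitting the trials across worker processes. The portfolio value, return distribution and trial counts are all invented for illustration; a production risk model would be far more elaborate.

```python
# Hedged sketch of a "business HPC" workload: Monte Carlo value at risk,
# with independent trials spread across processors. All figures below
# (portfolio value, return parameters, trial counts) are invented.
import random
from multiprocessing import Pool

PORTFOLIO_VALUE = 1_000_000.0            # hypothetical portfolio, in dollars
MEAN_RETURN, VOLATILITY = 0.0004, 0.02   # assumed daily return parameters

def simulate(args):
    """Run one worker's share of the trials; return simulated losses."""
    seed, trials = args
    rng = random.Random(seed)
    return [-PORTFOLIO_VALUE * rng.gauss(MEAN_RETURN, VOLATILITY)
            for _ in range(trials)]

if __name__ == "__main__":
    workers, trials_each = 8, 100_000
    with Pool(workers) as pool:
        batches = pool.map(simulate, [(s, trials_each) for s in range(workers)])
    losses = sorted(loss for batch in batches for loss in batch)
    # 99% one-day VaR: the loss exceeded in only 1 percent of trials.
    var_99 = losses[int(0.99 * len(losses))]
    print(f"99% one-day VaR over {len(losses):,} trials: ${var_99:,.0f}")
```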

These new HPC clustering technologies are built on open-source software and run on either a UNIX*/Intel* or a Linux/Intel platform. Especially in the case of Linux/Intel processing clusters, the total cost of ownership (TCO) can be attractive to data center decision makers. HPC is highly scalable and delivers high performance and reliability. “Of the top 500 supercomputer users, more than 90 percent of these sites implement HPC on Linux,” Chabowski says.
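
The article doesn’t name a specific clustering stack, but a common open-source pattern on Linux clusters is MPI. Purely as an assumption for illustration, the sketch below uses the mpi4py package (not mentioned in the article) to show how one reduction spreads across however many ranks a cluster provides; adding nodes shrinks each rank’s slice of the work, which is the scalability described above.

```python
# Assumed example of cluster-scale HPC using MPI via mpi4py.
# Launch across a cluster with, e.g.: mpirun -n 64 python reduce_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank (a process, possibly on a different node) crunches its slice.
n = 100_000_000
lo = rank * (n // size)
hi = n if rank == size - 1 else (rank + 1) * (n // size)
partial = sum(i * i for i in range(lo, hi))

# Combine partial results on rank 0. Adding nodes shrinks every slice,
# so the same job finishes faster as the cluster grows.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares below {n:,} across {size} ranks: {total}")
```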

