How High-Performance Computing Delivers Insight
High-performance computing delivers insight by melding cognitive approaches with classic simulation and modeling. Illustration by Craig Ward
Some people think of high-performance computing (HPC) in terms of huge, interconnected systems rapidly crunching numbers, most often under the auspices of government agencies. But industries ranging from financial services to manufacturing are also using these systems.
HPC systems are also being infused with cognitive capabilities, using IBM Watson* to make sense of all of the raw data they produce. They're no longer valued only for their data processing speed but also for their capacity to understand and explain complex problems, as David Turek, vice president, HPC Market Engagement, IBM, and James Sexton, IBM Fellow and director, Data Centric Systems, explain here.
IBM Systems Magazine (ISM): Could you provide an overview of what HPC systems are?
David Turek (DT): There isn’t a precise definition, in part because various people in the industry have been loose with language over the years and because the notion of HPC has changed over time. If we go back 20 years, people would say, “If your system is exercising its floating-point unit, it’s therefore an HPC system.” But that’s an insufficient definition of HPC, because floating-point performance is becoming progressively less important than some of the system’s other architectural features. Memory bandwidth and integer performance play a near equal role or, in some cases, a dominant role.
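Turek’s point about memory bandwidth versus floating-point performance can be made concrete with a rough sketch. The timing loop below (a hypothetical illustration, not from the interview; it assumes NumPy is installed) contrasts a STREAM-style “triad” kernel, which is limited by memory throughput because it does little arithmetic per byte moved, with a dense matrix multiply, which reuses data heavily and is limited by floating-point throughput:

```python
import time

import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)
c = np.empty(n)

# Bandwidth-bound kernel: c = a + 2*b touches three large arrays but
# performs only one multiply and one add per element, so memory
# bandwidth, not FLOPS, sets the speed.
start = time.perf_counter()
np.multiply(b, 2.0, out=c)
np.add(a, c, out=c)
triad_s = time.perf_counter() - start

# Compute-bound kernel: a dense matrix multiply reuses each element
# ~1,000 times, so floating-point throughput dominates.
m = np.random.rand(1000, 1000)
start = time.perf_counter()
product = m @ m
matmul_s = time.perf_counter() - start

print(f"triad: {triad_s:.3f}s  matmul: {matmul_s:.3f}s")
```

On most hardware the two kernels hit very different limits even though both are “doing floating point,” which is why floating-point use alone no longer defines HPC.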
“It takes a lot of expertise to understand raw data. If you can turn that into something a manager or a designer can use quickly and effectively, that’s where you move from analytics to cognitive.” —James Sexton, IBM Fellow and director, Data Centric Systems
From an application-centered perspective, HPC is focusing on solving problems in science and engineering that are fundamentally based on the exploitation of mathematical approaches to the formulation of problems that invoke a lot of architectural capabilities. This includes risk management, finance and other domains of quasi-commercial deployment.
Really big systems at, for example, Lawrence Livermore National Laboratory, are manifestly HPC systems because we know what these people are doing. But the bulk of HPC is done at much smaller institutions. Typically, those smaller institutions take a subset of the technology you see at scale and deploy it locally to solve their problems. If Livermore has a 100-cluster system consisting of several thousand nodes, you could take one cluster out and it would provide a subset of the performance Livermore sees. It would be quite adequate for a department at a university or a small industrial organization.
James Sexton (JS): In the really big systems at national research labs, the focus is scientific research, and tight coupling of compute elements is critical. Financial institutions use HPC in a different way, to model risk. They need a slightly different type of computer: it doesn’t need to be so tightly connected, but it still needs a lot of compute elements. Oil and gas companies use it for modeling their reservoirs and understanding the fuel extraction process from a reservoir. Technical companies—like Boeing—use it for better aircraft design. Car companies use it to run virtual simulations of what happens during car crashes to validate the safety of the car frame. Chemical companies use it for materials modeling and process design.