The AI revolution in HPC

26 October, 2017
Dave Turek
IBM

In a few weeks, when the HPC community gathers in Denver for Supercomputing 2017 (SC17), I expect it will become clear that the supercomputing field is poised to take the next giant step in its evolutionary path. More clients and more companies will showcase the value of integrating artificial intelligence (AI) and HPC. For decades, the HPC community has spoken longingly of efficiently steering simulations, improving the interpretation of complex model outputs and building more efficient and representative models of complex phenomena. Now we are beginning to see these desires realized as researchers and commercial enterprises demonstrate the utility of melding AI with HPC in products and approaches across a broad spectrum of problems and industries.

Innovation in AI and HPC

IBM has been focused on merging AI and HPC for some time. Our recently announced PowerAI Vision is a natural adjunct to HPC simulations that produce visual outputs. We announced Distributed Deep Learning (DDL), which exploits HPC interconnects and architecture to achieve near-linear scaling across hundreds of GPUs, thereby speeding “time-to-insight” (the data-parallel pattern behind this is sketched below). And of course, in IBM Watson we have the capability to ingest huge amounts of data to help guide model development or solution interpretation.
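IBM DDL has its own library interfaces, which are not shown here. Purely as a hedged illustration of the synchronous data-parallel pattern that such libraries accelerate, the sketch below averages locally computed gradients across MPI ranks with an allreduce; the toy least-squares model and all names in it are hypothetical.

```python
# Sketch of synchronous data-parallel training: each MPI rank computes
# gradients on its own data shard, then all ranks average gradients with
# an allreduce before applying an identical update. This is the generic
# pattern communication libraries like DDL optimize, not the DDL API.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)   # each rank sees different data
w = np.zeros(1000)                        # model weights, same on all ranks
lr = 0.01

for step in range(100):
    x = rng.standard_normal((32, 1000))   # this rank's local mini-batch
    y = rng.standard_normal(32)
    grad = x.T @ (x @ w - y) / len(y)     # local least-squares gradient

    # Sum gradients across all ranks, then divide to get the global mean.
    # On HPC fabrics this allreduce is the step that dominates scaling.
    avg_grad = np.empty_like(grad)
    comm.Allreduce(grad, avg_grad, op=MPI.SUM)
    avg_grad /= size

    w -= lr * avg_grad                    # every rank applies the same update
```

Launched with something like `mpirun -n 4 python train.py`, every rank holds identical weights at every step; the scaling efficiency of the whole scheme rests on how fast the allreduce runs.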

We have also been running projects with company leaders across industries that demonstrate how machine learning can significantly reduce the amount of hardware required by efficiently managing ensembles of simulations (see the sketch below). We have used cognitive methods coupled with HPC to showcase how the overall HPC workflow can be optimized for performance and improved insight. We have been working diligently on all popular ML/DL frameworks to make them easier to deploy and use, while also working to improve their overall performance on the IBM Power Systems platform. Finally, the Power nodes running all of this innovative software have AI capability built into the hardware, with NVLink-attached NVIDIA GPUs.
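The post doesn't detail IBM's method, so as one plausible, hypothetical sketch: a cheap surrogate model trained on completed runs can screen candidate parameter sets so that only the most promising ensemble members get a full simulation. `run_full_simulation` here stands in for a real (expensive) HPC job.

```python
# Hypothetical surrogate-assisted ensemble management: fit a cheap model
# to completed simulation results, score a large candidate pool without
# running it, and launch full simulations only for the best candidates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_full_simulation(params):
    # Stand-in for an expensive HPC job; here, a toy analytic response.
    return np.sin(params).sum() + 0.1 * np.linalg.norm(params)

rng = np.random.default_rng(0)

# Seed the surrogate with a handful of already-completed runs.
X = rng.uniform(-3, 3, size=(8, 4))
y = np.array([run_full_simulation(p) for p in X])
surrogate = GaussianProcessRegressor().fit(X, y)

# Score 500 candidate parameter sets without running any of them.
candidates = rng.uniform(-3, 3, size=(500, 4))
mean, std = surrogate.predict(candidates, return_std=True)

# Run full simulations only where the surrogate is promising or unsure
# (a simple upper-confidence-bound rule), trimming the ensemble and the
# hardware it would otherwise occupy.
score = mean + std
for params in candidates[np.argsort(score)[-5:]]:
    print(run_full_simulation(params))
```

The design choice worth noting is the uncertainty term: running the cases the surrogate is least sure about is what keeps the pruned ensemble from missing interesting regions of parameter space.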

IBM Power Systems bonds AI and HPC

Recently, National Computational Infrastructure (NCI) in Australia acquired numerous IBM Power Systems nodes to add to its existing HPC infrastructure, including the Raijin system. For now, NCI researchers are utilizing these nodes because the extraordinary memory bandwidth by itself provides a significant performance advantage for some of their applications (a rough illustration follows below). As familiarity with the technology increases, we anticipate that these nodes will give researchers the opportunity to explore the intersection of AI and HPC across a wide range of scientific applications. The point, of course, is that the IBM Power Systems design gives clients HPC and AI capability rolled into one package. ZDNet just published an article on the performance advantages IBM Power Systems is delivering to NCI over x86 for data-intensive workloads.
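To see why raw memory bandwidth matters so much for these workloads, here is a rough, hypothetical probe in the spirit of the STREAM benchmark's “add” kernel. NumPy timings are coarse and this is not how NCI benchmarks its systems; it only illustrates that a streaming kernel's speed is set by bytes moved, not flops.

```python
# Rough STREAM-style bandwidth probe: time one pass of a = b + c over
# arrays far larger than cache, so performance is bound by memory traffic.
import time
import numpy as np

n = 50_000_000                  # three float64 arrays, ~400 MB each
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)

np.add(b, c, out=a)             # warm-up pass (page faults, frequency ramp)

start = time.perf_counter()
np.add(b, c, out=a)             # "add" kernel: two loads + one store/element
elapsed = time.perf_counter() - start

bytes_moved = 3 * n * 8         # two reads and one write of 8 bytes each
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```

A kernel like this performs one floating-point add per 24 bytes of traffic, so the memory subsystem, not the peak flops, decides the runtime, which is exactly the regime where wide-bandwidth nodes pull ahead.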

Change is hard for many to accept immediately. And the HPC community, for all its technological innovation over the years, is no different: for many, the only thing that matters is the number of peak flops in a system, regardless of whether those flops can ever be used or whether they do anything to optimize a complex workflow. But change has a way of overcoming the inertia of the past as the value of the new approach becomes clear. The future of HPC will likely be tied not to the number of flops but to the insight that is generated. And the revolution going on at this moment, in which AI and HPC come together to yield dramatically enhanced value, may well be the catalyst for that change. Find out how to get started with IBM Power Systems.
