Deep learning & HPC applications get even faster

27 March, 2018
Dylan Boday
IBM

Data is coming at us from every direction, and it’s up to data scientists and IT leaders to make sense of it by cleaning, processing and extracting actionable insights. Preparing that data pipeline is challenging enough; once the work is done, you don’t want hardware components to limit your ability to run AI workloads.

Accelerated deep learning, machine learning and AI algorithms depend on both CPU and GPU memory. As a rule, the more complex the algorithm and the larger the AI model, the more memory you need. Without a sufficient memory footprint, you might have to limit your workloads to small data sets and low-resolution image classification. To drive an effective data science strategy, you need to tap into all of your data, including your largest data sets, and both train on and analyze full-resolution images and videos.

Increased GPU memory positions data scientists and AI researchers to create larger models, enabling them to work with, and get actionable insights from, larger data sets. In short, more memory enables larger models, which in turn lead to better insights. For example, instead of analyzing compressed, pixelated images, one can work with high-resolution images in vivid detail and color. Instead of analyzing postage-stamp-sized video streams, one can apply image classification to 4K video.
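To make the memory pressure concrete: a single uncompressed 4K RGB frame in FP32 is roughly 95 MiB before any network activations, so even a modest batch consumes gigabytes. Below is a minimal, hypothetical CUDA sketch (the batch size and resolutions are illustrative choices, not figures from this announcement) that tallies the raw input footprint at a few resolutions and checks it against the device memory reported by cudaMemGetInfo:

```
#include <cstdio>
#include <cuda_runtime.h>

// Raw FP32 footprint of one batch of RGB frames, in bytes.
static size_t batchBytes(size_t width, size_t height, size_t batch) {
    return width * height * 3 /*channels*/ * sizeof(float) * batch;
}

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);  // free vs. total device memory

    const size_t batch = 32;  // illustrative batch size
    struct { const char* name; size_t w, h; } res[] = {
        {"224x224 (classification crop)", 224, 224},
        {"1080p frame", 1920, 1080},
        {"4K frame", 3840, 2160},
    };

    printf("GPU memory: %.1f GiB free of %.1f GiB total\n",
           freeBytes / 1073741824.0, totalBytes / 1073741824.0);
    for (auto& r : res) {
        size_t b = batchBytes(r.w, r.h, batch);
        printf("batch of %zu %-30s: %6.2f GiB for the input alone%s\n",
               batch, r.name, b / 1073741824.0,
               b > freeBytes ? "  <-- does not fit" : "");
    }
    return 0;
}
```

Weights, activations and gradients multiply that input footprint many times over during training, which is why added GPU memory directly expands the resolutions and model sizes you can handle.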

Bigger models can also equate to larger data sets, which may yield unexpected discoveries. One can find interesting correlations in data by applying deep learning to data sets that might seem otherwise unrelated. Some of the most interesting prospects for AI revolve around getting answers to questions we didn’t know to ask – and that points to using bigger models with more data.

Today, we are announcing that IBM Power Systems will be adding the NVIDIA Tesla V100 32GB GPU to the POWER9-based IBM Power System AC922 server. The larger GPU memory allows bigger data sets to fit entirely in the GPU for acceleration. With the direct, high-speed NVIDIA NVLink connection between the IBM POWER9 CPU and the NVIDIA Tesla V100 GPU, we can deliver 5.6 times the data throughput of the PCIe Gen 3 interface found in comparable x86-based servers[1].
We demonstrated this recently with both deep learning in PowerAI and machine learning in Snap ML, using our large model support feature.
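The benchmark behind that throughput claim is a host-to-device (H2D) copy test, as the footnote below notes. Here is a minimal sketch of the same kind of measurement, timing pinned-memory copies with CUDA events; the 256 MiB buffer size and iteration count are my own illustrative choices, not the parameters of the cited test:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;  // 256 MiB per transfer, illustrative
    const int iters = 20;

    // Pinned (page-locked) host memory gives the best H2D throughput.
    float *hostBuf = nullptr, *devBuf = nullptr;
    cudaMallocHost(&hostBuf, bytes);
    cudaMalloc(&devBuf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);  // warm-up

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbps = (double)bytes * iters / (ms / 1000.0) / 1e9;
    printf("Host-to-device bandwidth: %.1f GB/s\n", gbps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(devBuf);
    cudaFreeHost(hostBuf);
    return 0;
}
```

On a PCIe Gen 3 x16 link, a test of this kind reports on the order of the 12 GB/s cited in the footnote; over the NVLink connection between the POWER9 CPU and the V100, it approaches the 68 GB/s rated figure, the source of the 5.6x claim.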

Visit the IBM Power Systems website to learn more about the best server for enterprise AI.

[1] 5.6x I/O bandwidth claim based on the CUDA host-to-device (H2D) bandwidth test, conducted on a Xeon E5-2640 v4 + P100 versus POWER9 + V100 (12 GB/s vs. 68 GB/s rated).
