
Deep learning, self-driving cars, and AI are all huge topics these days, with companies like Nvidia, IBM, AMD, and Intel all throwing their hats into the ring. Now Cray, which helped pioneer the very concept of a supercomputer, is also bringing its own solutions to market.

Cray announced a pair of new systems: the Cray CS-Storm 500GT and the CS-Storm 500NX. Both are designed to work with Nvidia's Pascal-based Tesla GPUs, but they offer different feature sets and capabilities. The CS-Storm 500GT supports up to 8x 450W or 10x 400W accelerators, including Nvidia's Tesla P40 or P100 GPU accelerators. Add-in boards like Intel's Knights Landing and FPGAs built by Nallatech are also supported in this system, which uses PCI Express for its peripheral interconnect. The 500GT platform uses Intel's Skylake Xeon processors.

The Cray CS-Storm 500NX supports up to eight P100 GPUs and taps Nvidia's NVLink interconnect rather than PCI Express. Xeon Phi and Nallatech devices aren't listed as being compatible with this system architecture. Full specs on each are listed below:

[Image: CS-Storm 500GT and 500NX specification comparison]

The CS-Storm 500NX uses NVLink, which is why Cray can list it as supporting up to eight P100 SXM2 GPUs without needing eight PCIe 3.0 slots (just in case that was unclear).

"Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer," said Fred Kohout, Cray's senior vice president of products and chief marketing officer. "The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly-scalable and tuned infrastructure."


Nvidia's NVLink fabric can be used to attach GPUs without using PCI Express.

The surge in self-driving cars, AI, and deep learning technology could be a huge boon to companies like Cray, which once dominated the supercomputing industry. Cray went from an early leader in the space to a shadow of its former self after a string of acquisitions and unsuccessful products in the late 1990s and early 2000s. From 2004 forward, the company has enjoyed more success, with multiple high-profile design wins using AMD, Intel, and Nvidia hardware.

So far, Nvidia has emerged as the overall leader in HPC workload accelerators. Of the 86 systems listed as using an accelerator on the TOP500 list, 60 of them use Fermi, Kepler, or Pascal (Kepler is the clear winner, with 50 designs). The next-closest competitor is Intel, which has 21 Xeon Phi wins.

AMD has made plans to enter these markets with deep learning accelerators based on its Polaris and Vega architectures, but those chips haven't actually launched yet. By all accounts, these are the killer growth markets for the industry as a whole, and they help explain why even some game developers like Blizzard want to get in on the AI craze. As compute resources shift towards Amazon, Microsoft, and other cloud service providers, the companies that can provide the hardware these workloads run on will be best positioned for the future. Smartphones and tablets didn't really work out for Nvidia or Intel (making AMD's decision to stay out of those markets look very, very wise in retrospect), but both are positioned well to capitalize on these new dense server trends. AMD is obviously playing catch-up on the CPU and GPU front, but Ryzen should deliver strong server performance when Naples launches later this quarter.