Saturday, November 2, 2024

Supermicro Accelerates the Era of AI and the Metaverse with Top-of-the-Line Servers for AI Training, Deep Learning, HPC, and Generative AI, Featuring NVIDIA HGX and PCIe-Based H100 8-GPU Systems

Supermicro, Inc., a Total IT Solution Provider for AI/ML, Cloud, Storage, and 5G/Edge, has announced that it has begun shipping its new top-of-the-line GPU servers featuring the latest NVIDIA HGX H100 8-GPU system. Supermicro also incorporates the new NVIDIA L4 Tensor Core GPU into a wide range of application-optimized servers, from the edge to the data center.

“Supermicro offers the most comprehensive portfolio of GPU systems in the industry, including servers in 8U, 6U, 5U, 4U, 2U, and 1U form factors, as well as workstations and SuperBlade systems that support the full range of new NVIDIA H100 GPUs,” said Charles Liang, president and CEO of Supermicro. “With our new NVIDIA HGX H100 Delta-Next server, customers can expect 9x performance gains over the previous generation for AI training applications. Our GPU servers use innovative airflow designs that reduce fan speeds, lower noise levels, and consume less power, resulting in a reduced total cost of ownership (TCO). In addition, we deliver complete rack-scale liquid-cooling options for customers looking to further future-proof their data centers.”


Supermicro’s most powerful new 8U GPU server is now shipping in volume. Optimized for AI, DL, ML, and HPC workloads, this new Supermicro 8U server is powered by the NVIDIA HGX H100 8-GPU, which delivers the highest GPU-to-GPU bandwidth using the fastest NVIDIA NVLink 4.0 technology and NVSwitch interconnects, together with NVIDIA Quantum-2 InfiniBand and Spectrum-4 Ethernet networking, to break through the barriers of AI at scale.
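On such a node, all eight GPUs should see one another as peers over the NVLink/NVSwitch fabric. The following is a minimal sketch, assuming only that PyTorch with CUDA is installed on the system (an illustration, not part of the Supermicro or NVIDIA announcement), that prints a GPU-to-GPU peer-access matrix so an administrator can confirm the interconnect is visible to software:

# Minimal sketch: print a GPU-to-GPU peer-access matrix on a multi-GPU node.
# Assumes a CUDA-capable system with PyTorch installed; on an NVLink/NVSwitch
# platform, every GPU pair would be expected to report peer access.
import torch

def peer_access_matrix():
    n = torch.cuda.device_count()
    for i in range(n):
        name = torch.cuda.get_device_name(i)
        row = []
        for j in range(n):
            if i == j:
                row.append("-")
            else:
                row.append("Y" if torch.cuda.can_device_access_peer(i, j) else "N")
        print(f"GPU{i} ({name}): {' '.join(row)}")

if __name__ == "__main__":
    if torch.cuda.is_available():
        peer_access_matrix()
    else:
        print("No CUDA devices visible.")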

In addition, Supermicro offers several performance-optimized GPU server configurations, including direct-connect, single-root, and dual-root CPU-to-GPU topologies, as well as front- or rear-I/O models with AC and DC power in standard and OCP DC rack configurations. The Supermicro X13 SuperBlade enclosure accommodates 20 NVIDIA H100 Tensor Core PCIe GPUs or 40 NVIDIA L40 GPUs in an 8U enclosure, while up to 10 NVIDIA H100 PCIe GPUs or 20 NVIDIA L4 Tensor Core GPUs can be used in a 6U enclosure. These new systems deliver the optimized acceleration ideal for running NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform.
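For a densely populated PCIe enclosure, one quick check is to enumerate the GPUs the host actually exposes and compare the count and memory against the configuration ordered. A minimal sketch, again assuming PyTorch with CUDA as an illustration (not part of the announcement):

# Minimal sketch: enumerate visible CUDA devices with their memory and SM count,
# e.g. to confirm a fully populated enclosure exposes the expected GPUs.
import torch

def list_gpus():
    n = torch.cuda.device_count()
    print(f"Visible CUDA devices: {n}")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        mem_gib = props.total_memory / (1024 ** 3)
        print(f"  GPU{i}: {props.name}, {mem_gib:.1f} GiB, {props.multi_processor_count} SMs")

if __name__ == "__main__":
    list_gpus()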

SOURCE: PR Newswire
