Friday, November 22, 2024

Supermicro Expands Manufacturing Footprint by Increasing Global Rack-Scale Manufacturing Capacity to 5,000 Fully Tested Rack Solutions for AI, HPC and Liquid Cooling Per Month

Supercomputing Conference: Supermicro, Inc., a manufacturer of total IT solutions for AI, Cloud, Storage and 5G/Edge, is expanding its AI and HPC rack delivery capacity and advanced liquid cooling solutions. Globally, Supermicro’s full rack-scale delivery capacity is supported by several state-of-the-art integration facilities in the United States, Taiwan, the Netherlands and Malaysia. Further manufacturing expansions and locations are actively being considered to meet increasing demand for Supermicro’s portfolio of rack-scale AI and HPC solutions.

“With our global presence, we can now supply 5,000 racks per month to support large orders for fully integrated, liquid-cooled racks requiring up to 100 kW per rack,” said Charles Liang, president and CEO of Supermicro. “We expect that up to 20% of new data centers will adopt liquid cooling solutions as CPUs and GPUs heat up. With the development of AI technologies, our industry-leading rack-scale solutions are in high demand in an increasing proportion of data centers worldwide. Rack-scale and liquid cooling solutions should be planned early in the design and implementation process, resulting in faster delivery times to meet the urgent implementation requirements for AI and hyperscale data centers.”


Supermicro maintains an extensive inventory of “Golden SKUs” to meet fast delivery times for global rollouts. Large CSPs and enterprise data centers running next-generation AI applications will quickly benefit from shorter delivery times worldwide. Supermicro’s broad range of servers, from the data center to the edge (IoT), integrates seamlessly, resulting in increased adoption and more engaged customers.

With the recent announcement of the MGX product line, featuring the NVIDIA GH200 Grace Hopper Superchip and the NVIDIA Grace CPU Superchip, Supermicro continues to expand its lineup of AI-optimized servers. Combined with its existing LLM-optimized NVIDIA HGX 8-GPU solutions, the NVIDIA L40S and L4 offerings, the Intel Data Center GPU Max Series, Intel Gaudi 2 and the AMD Instinct MI series GPUs, Supermicro can support the full range of AI training and AI inferencing applications. The Supermicro All-Flash storage servers with NVMe E1.S and E3.S storage systems accelerate data access for various AI training applications, resulting in faster execution times. For HPC applications, the Supermicro SuperBlade, equipped with GPUs, reduces execution time for high-end simulations while consuming less power.

When liquid cooling is integrated into a data center, it can reduce the facility’s PUE by up to 50% compared to existing industry averages. The reduced power footprint and resulting lower PUE significantly reduce operational expenses when running generative AI or HPC simulations.
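To illustrate how a lower PUE translates into operating savings, the minimal sketch below compares facility power draw and annual energy cost for the same IT load at two PUE values. All figures in it (a 1 MW IT load, PUE values of 1.6 and 1.3, and $0.10/kWh electricity) are hypothetical assumptions for illustration, not numbers published by Supermicro.

```python
# Minimal sketch of how PUE affects facility power and annual energy cost.
# All numbers below are hypothetical assumptions, not Supermicro figures.

IT_LOAD_KW = 1_000        # assumed IT equipment load (1 MW of servers)
PUE_AIR_COOLED = 1.6      # assumed industry-average PUE
PUE_LIQUID_COOLED = 1.3   # assumed PUE with liquid cooling
PRICE_PER_KWH = 0.10      # assumed electricity price, USD
HOURS_PER_YEAR = 24 * 365

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so facility power = IT power * PUE."""
    return it_load_kw * pue

air = facility_power_kw(IT_LOAD_KW, PUE_AIR_COOLED)
liquid = facility_power_kw(IT_LOAD_KW, PUE_LIQUID_COOLED)
annual_savings_usd = (air - liquid) * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"Air-cooled facility draw:    {air:,.0f} kW")
print(f"Liquid-cooled facility draw: {liquid:,.0f} kW")
print(f"Estimated annual energy savings: ${annual_savings_usd:,.0f}")
```

Under these assumed inputs, the same 1 MW of IT equipment draws 300 kW less at the facility level, worth roughly $263,000 per year; actual savings depend entirely on the real load, PUE values and energy prices of a given deployment.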

With Supermicro’s rack-scale integration and deployment services, customers can start with proven reference designs for rapid installation, tailored to their unique business objectives. Customers can then work with Supermicro-qualified experts to design optimized solutions for specific workloads. Upon delivery, the racks only need to be connected to power, networking and liquid cooling infrastructure, making deployment effectively plug-and-play. Supermicro is committed to providing complete data center IT solutions, including on-site delivery, deployment, integration and benchmarking, to maximize operational efficiency.

SOURCE: PRNewswire
