Leading full stack of AI supercomputing solutions unveiled at GTC 2024.
To help meet the increasing demands for generative AI, ASUS uses the latest technologies from NVIDIA, including the B200 Tensor Core GPU, the GB200 Grace Blackwell Superchip, and the H200 NVL, to help deliver optimized AI server solutions to boost AI adoption across a wide range of industries.
SINGAPORE, Media OutReach Newswire, March 21, 2024: ASUS today announced its participation at the NVIDIA GTC global AI conference, where it will showcase its solutions at booth #730. On show will be the apex of ASUS GPU server innovation, ESC NM1-E1 and ESC NM2-E1, powered by the NVIDIA MGX modular reference architecture, accelerating AI supercomputing to new heights.
To better support enterprises in establishing their own generative AI environments, ASUS offers an extensive lineup of servers, from entry-level to high-end GPU server solutions, plus a comprehensive range of liquid-cooled rack solutions, to meet diverse workloads. Additionally, by leveraging its MLPerf expertise, the ASUS team is pursuing excellence by optimizing hardware and software for large language model (LLM) training and inferencing, and by seamlessly integrating total AI solutions to meet the demanding landscape of AI supercomputing.
Tailored AI solutions with the all-new ASUS NVIDIA MGX-powered server
The latest ASUS NVIDIA MGX-powered 2U servers, ESC NM1-E1 and ESC NM2-E1, feature the NVIDIA GH200 Grace Hopper Superchip, which offers high performance and efficiency. The NVIDIA Grace CPU includes Armv9 Neoverse V2 CPU cores with Scalable Vector Extension 2 (SVE2) and is connected to the GPU by NVIDIA NVLink-C2C technology. Integrated with NVIDIA BlueField-3 DPUs and ConnectX-7 network adapters, ASUS MGX-powered servers deliver a blazing data throughput of 400 Gb/s, ideal for enterprise AI development and deployment. Coupled with NVIDIA AI Enterprise, an end-to-end, cloud-native software platform for building and deploying enterprise-grade AI applications, the MGX-powered ESC NM1-E1 provides unparalleled flexibility and scalability for AI-driven data centers, HPC, data analytics, and NVIDIA Omniverse applications.
Advanced liquid-cooling technology
The surge in AI applications has heightened the demand for advanced server-cooling technology. ASUS direct-to-chip (D2C) cooling offers a quick, simple option that distinguishes itself from the competition. D2C can be rapidly deployed, lowering data center power-usage effectiveness (PUE) ratios. ASUS servers, ESC N8-E11 and RS720QN-E11-RS24U, support manifolds and cold plates, enabling diverse cooling solutions. Additionally, ASUS servers accommodate a rear-door heat exchanger compliant with standard rack-server designs, eliminating the need to replace entire racks—only the rear door needs to be swapped to enable liquid cooling in the rack. By closely collaborating with industry-leading cooling solution providers, ASUS provides enterprise-grade, comprehensive cooling solutions and is committed to minimizing data center PUE, carbon emissions, and energy consumption to assist in the design and construction of greener data centers.
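PUE, the metric referenced above, is simply total facility power divided by IT equipment power; a value near 1.0 means almost no energy is spent on cooling and overhead. A minimal sketch of the calculation, using hypothetical illustrative figures (not ASUS-published numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power-usage effectiveness: total facility power / IT equipment power.

    An ideal data center approaches 1.0; lower is better.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: the same 1,000 kW IT load, where liquid cooling
# reduces the cooling/overhead share of facility power.
air_cooled = pue(total_facility_kw=1500, it_equipment_kw=1000)     # 1.5
liquid_cooled = pue(total_facility_kw=1150, it_equipment_kw=1000)  # 1.15
```

The example shows why D2C and rear-door heat exchangers matter: cutting cooling overhead lowers the numerator while the useful IT load stays constant.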
Confident AI software solutions
With its world-leading expertise in AI supercomputing, ASUS provides optimized server design and rack integration for data-intensive workloads. At GTC, ASUS will showcase the ESC4000A-E12 to demonstrate a no-code AI platform with an integrated software stack, enabling businesses to accelerate AI development across LLM pre-training, fine-tuning, and inference—reducing risk and time-to-market without starting from scratch. Additionally, ASUS provides a comprehensive solution that supports LLMs of different parameter counts, including 7B, 33B, and even 180B, with customized software facilitating seamless server data dispatching. By optimizing the allocation of GPU resources for fine-tuned training, the software stack ensures that AI applications and workloads run without wasting resources, which helps to maximize efficiency and return on investment (ROI). Furthermore, the software-hardware synergy delivered by ASUS gives businesses the flexibility to choose the AI capabilities that best fit their needs, allowing them to push ROI even further.
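Why GPU allocation must track model size: weight memory scales linearly with parameter count, so a 7B model and a 180B model need very different GPU footprints. A rough back-of-the-envelope sketch (the 2-bytes-per-parameter FP16 figure is standard; the activation/KV-cache overhead multiplier is an assumption and varies by workload — this is not ASUS's sizing method):

```python
def inference_memory_gb(params_billion: float,
                        bytes_per_param: int = 2,
                        overhead: float = 1.2) -> float:
    """Rough GPU memory (GiB) to host model weights for inference.

    bytes_per_param: 2 for FP16/BF16 weights, 1 for INT8 quantization.
    overhead: assumed multiplier for activations and KV cache;
              real values depend on batch size and context length.
    """
    return params_billion * 1e9 * bytes_per_param * overhead / 2**30

# The three model sizes named in this release:
for size in (7, 33, 180):
    print(f"{size}B params: ~{inference_memory_gb(size):.0f} GiB at FP16")
```

Under these assumptions a 7B model fits on a single mainstream GPU, while 180B requires sharding across several high-memory accelerators — exactly the kind of placement decision an orchestration stack automates.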
This innovative software approach optimizes the allocation of dedicated GPU resources for AI training and inferencing, boosting system performance. The integrated software-hardware synergy caters to diverse AI training needs, empowering businesses of all sizes, including SMBs, to leverage advanced AI capabilities with ease and efficiency.
To address the evolving requirements of enterprise IoT applications, ASUS, renowned for its robust computing capabilities, is collaborating with industrial partners, software experts, and domain-focused integrators. These collaborations aim to offer turnkey server support for complete solutions, including full installation and testing for modern data centers, AI, and HPC applications.
AVAILABILITY & PRICING
ASUS servers are available worldwide. Please visit https://servers.asus.com for more ASUS data-center solutions, or contact your local ASUS representative for further information.