ASUS is stepping into the thermal crisis facing modern data centers with a direct challenge to air cooling's reign. On February 23, the company unveiled its Optimized Liquid-Cooling Solutions and Strategic Partner Framework—a comprehensive infrastructure play designed to handle the brutal thermal realities of next-generation AI and HPC systems. The timing is deliberate: ASUS will showcase the approach as a Diamond Sponsor at NVIDIA GTC 2026, where GPU thermal management is already a dominant conversation topic.
The numbers driving this announcement are unforgiving. NVIDIA's latest GB200 GPUs alone push 1,200W of thermal output per chip, and as data center operators pack more processors into each rack—densities now climbing from 16kW to well beyond 100kW—traditional air cooling has hit a wall. Physics doesn't negotiate. With hyperscalers and enterprises racing to deploy cutting-edge AI infrastructure, the ability to remove heat efficiently and reliably has shifted from a performance nice-to-have into a hard constraint on deployment capability.
ASUS's framework targets this inflection point head-on, betting that coordinated liquid-cooling solutions and supply chain partnerships will become essential infrastructure for any organization serious about next-generation AI workloads. The announcement signals an industry recognition that the thermal problem isn't temporary—it's structural, and solving it requires rethinking how systems are designed and cooled from the ground up.
The Heat Problem Modern AI Data Centers Can No Longer Ignore
The thermal wall has arrived. NVIDIA's GB200 GPUs already consume up to 1,200W per chip—a figure that pales in comparison to what's coming. The company's next-generation "Feynman"-class processors are projected to hit approximately 4,400W per chip by 2028, fundamentally breaking the cooling assumptions that have governed data center design for decades.
Traditional air cooling, which has served the industry adequately for years, maxes out at 10–20 kW per rack. Modern AI racks routinely exceed 30–100 kW. The math doesn't work anymore. Worse, air cooling devours 30–40% of total data center energy just to move heat around—a massive inefficiency compounded by typical Power Usage Effectiveness (PUE) ratios of 1.4–1.8, meaning data centers burn nearly as much energy on cooling and overhead as they do on actual computation.
Liquid cooling demolishes these constraints. It transfers heat approximately four times more effectively than air, enabling denser chip deployments and dramatically lower PUE ratios.
The urgency has reached Congress. In September 2025, the U.S. House of Representatives introduced H.R. 5332—the "Liquid Cooling for AI Act of 2025"—explicitly acknowledging that air cooling no longer suffices for next-generation AI infrastructure. The message is unmistakable: liquid cooling isn't optional; it's becoming mandatory.
Market data underscores the shift. The liquid cooling sector hit $4.8–$5.1 billion in 2025 and is projected to balloon to $27.1 billion by 2035, according to GM Insights. NetZero Insights reported that 84% of all 2025 cooling-related investment flowed toward liquid solutions—a near-total reallocation of capital.
The stakes extend beyond data centers. These facilities accounted for 4% of total U.S. electricity consumption in 2024 and are expected to more than double their demand by 2030, according to Pew Research Center analysis. Without aggressive adoption of liquid cooling, that trajectory becomes unsustainable—economically and environmentally. ASUS's announcement arrives not as innovation theater, but as infrastructure necessity.
Inside the ASUS Optimized Liquid-Cooling Solution Portfolio
ASUS has unveiled a comprehensive liquid-cooling framework designed to address thermal management across enterprise and hyperscale deployments. The portfolio spans three distinct architectures, each tailored to different infrastructure requirements and operational constraints.
Direct-to-Chip Cooling for Maximum Thermal Precision
Cold plates mounted directly on CPUs and GPUs extract heat at the component level, enabling precision thermal management without intermediary air loops. This approach supports processors up to 400W and GPUs exceeding 350W, unlocking rack densities of 30–80 kW in advanced configurations.
The efficiency gains are substantial: direct-to-chip cooling reduces fan power consumption by over 90% compared to traditional air cooling while cutting noise levels by 29.6%. Overall power consumption drops 20–25% versus air-cooled alternatives, translating to significant operational cost reductions. The solution deploys rapidly on existing infrastructure, making it ideal for retrofit scenarios where data center overhauls prove prohibitively expensive.
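To put the claimed 20–25% total power reduction into operational terms, here is a rough annual-cost sketch. The 50 kW rack load and $0.10/kWh electricity rate are illustrative assumptions, not figures from the announcement:

```python
def annual_energy_cost(load_kw: float, usd_per_kwh: float) -> float:
    """Annual energy cost of a constant electrical load, in USD."""
    return load_kw * 24 * 365 * usd_per_kwh

# Illustrative assumptions: a 50 kW air-cooled rack, $0.10/kWh.
rack_kw = 50.0
rate = 0.10
baseline = annual_energy_cost(rack_kw, rate)
# Midpoint of the claimed 20-25% overall power reduction.
liquid = annual_energy_cost(rack_kw * (1 - 0.225), rate)
print(f"${baseline - liquid:,.0f} saved per rack per year")
```

At these assumed figures the reduction is on the order of ten thousand dollars per rack per year—multiplied across hundreds of racks, the retrofit economics become clear.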
ASUS sources precision components from strategic partners Auras Technology and Cooler Master, ensuring reliability and performance consistency across deployments.
In-Row CDU-Based Cooling for Enterprise-Scale Density
A centralized Coolant Distribution Unit (CDU) manages coolant flow across server rows in a closed-loop system, currently rated for up to 100 kW per rack. ASUS's roadmap targets 200 kW per rack capacity, designed specifically for next-generation NVIDIA Vera Rubin NVL72 systems—positioning the solution ahead of emerging high-density GPU cluster demands.
This architecture suits enterprise and hyperscale operators managing large server deployments. Infrastructure partners Schneider Electric and Vertiv provide rack-level integration and power management capabilities, simplifying deployment logistics.
Hybrid Configurations That Bridge Legacy and Next-Gen Infrastructure
A hybrid approach combines chip-level liquid cooling with traditional air handling, allowing incremental upgrades without complete facility overhauls. This cost-efficient transition path proves critical for operators with existing infrastructure investments, enabling phased migrations rather than disruptive replacements.
Hybrid configurations provide maximum flexibility for mixed workload environments, where some applications benefit from liquid cooling while others remain air-cooled—a pragmatic approach for heterogeneous data centers navigating the cooling transition.
Together, these three solutions let operators choose how far and how fast to move to liquid cooling—full direct-to-chip, centralized in-row CDUs, or a phased hybrid—positioning ASUS across the full range of thermal requirements rather than at a single price or density point.
The Strategic Partner Framework That Makes It All Work
ASUS's liquid-cooling announcement doesn't exist in a vacuum. Behind the engineering sits a carefully architected partner ecosystem designed to eliminate friction points that typically plague enterprise thermal deployments. By aligning infrastructure heavyweights, component specialists, and real-world validation partners, ASUS has created a framework that addresses cooling challenges at every system level—from the chip to the data center floor.
Infrastructure Heavyweights: Schneider Electric and Vertiv
Schneider Electric and Vertiv form the backbone of ASUS's infrastructure layer, bringing decades of expertise in enterprise power management and thermal systems to the table. Both are recognized global leaders in data center operations, a credential that matters when deploying mission-critical liquid-cooled systems at scale.
Schneider Electric contributes rack-level integration and power management capabilities that span from hyperscale deployments down to enterprise environments. Vertiv brings complementary strength in large-scale thermal management and facility-level integration, ensuring that cooling solutions don't operate in isolation but integrate seamlessly with existing data center infrastructure.
The strategic value here is tangible: ASUS solutions now carry end-to-end validation from chip-level cooling all the way through facility-wide infrastructure. This eliminates a traditional pain point—the gap between component vendors and infrastructure operators where compatibility issues fester. With Schneider Electric and Vertiv embedded in the framework, deployment complexity drops and interoperability improves measurably.
Component Specialists: Auras Technology and Cooler Master
Where infrastructure partners handle the big picture, Auras Technology and Cooler Master obsess over precision. Both supply the mission-critical cooling components—cold plates, manifolds, and coolant distribution units—that determine whether a liquid-cooled system performs or fails.
Auras Technology specializes in thermal stability and high-performance optimization for NVIDIA-compatible configurations, ensuring that cutting-edge accelerators maintain optimal thermals under sustained workloads. Cooler Master contributes advanced CDU (Coolant Distribution Unit) components and cold plate engineering purpose-built for server-grade deployments where reliability cannot be compromised.
Together, they guarantee component-level reliability and cross-system compatibility, reducing the risk calculus for enterprises considering liquid cooling.
Real-World Validation: The NCHC Nano4 Supercomputer Deployment
Theory becomes fact when systems go live. Taiwan's National Center for High-Performance Computing (NCHC) proved the framework works at scale when it deployed the Nano4 AI supercomputer on November 26, 2025—a dual-compute architecture pairing an NVIDIA HGX H200 cluster (81.55 PFLOPS) with Taiwan's first NVIDIA GB200 NVL72 system, running 36 Grace CPUs and 72 Blackwell GPUs per rack.
Built entirely with ASUS Advanced Direct Liquid Cooling (DLC), Nano4 achieved a PUE of 1.18—crushing typical air-cooled baselines of 1.4–1.8. The deployment itself validated ASUS's Infrastructure Deployment Capability (AIDC), completing full system setup in days. Nano4 now ranks #29 on the TOP500 global supercomputer list, operating as tangible proof that the partner framework delivers.
Performance Benchmarks and What They Mean for Your AI Workloads
ASUS's liquid-cooling announcement arrives backed by hardware credentials that matter in enterprise purchasing decisions. The company's track record across standardized benchmarks—combined with real-world deployments achieving exceptional energy efficiency—establishes a concrete foundation for claims about AI infrastructure performance and sustainability.
Record-Breaking SPEC CPU® and MLPerf™ Results
ASUS holds 2,156 No. 1 SPEC CPU® records across its server platforms, alongside 248 No. 1 MLPerf™ results for AI training and inference workloads. These aren't theoretical achievements; they represent validated, top-tier results spanning diverse server configurations and use cases.
The distinction matters: SPEC CPU® benchmarks measure general compute performance on CPU-intensive workloads, while MLPerf™ specifically tests AI-focused performance across training and inference scenarios. Together, they cover the workload spectrum that defines modern data centers.
ASUS has validated these platforms against next-generation NVIDIA Vera Rubin NVL72 configurations, ensuring benchmarks reflect real-world hardware partnerships rather than isolated lab results. For enterprises evaluating infrastructure vendors, this breadth of certified records underscores ASUS's credibility as both an enterprise and hyperscale AI infrastructure partner.
Achieving a PUE of 1.18 — What It Means in Practice
Power Usage Effectiveness (PUE) divides total facility power by IT equipment power. A perfect PUE of 1.0 is theoretical; typical air-cooled data centers operate at PUE 1.4–1.8, meaning cooling and facility overhead consume an extra 40 to 80 percent on top of the power delivered to compute.
ASUS achieved a PUE of 1.18 at the NCHC Nano4 supercomputer in Taiwan using full Direct Liquid Cooling—as documented in the ASUS NCHC Nano4 case study—a substantial improvement that translates directly to measurable business impact: lower energy costs, reduced carbon emissions, and higher compute density without power constraints.
Consider a concrete example: a data center with a 10 MW IT load operating at PUE 1.6 spends 6 MW on overhead; the same facility at PUE 1.18 drops overhead to 1.8 MW. That 4.2 MW savings can be redirected entirely to compute deployment, fundamentally changing infrastructure economics at scale.
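The arithmetic above follows directly from the PUE definition (overhead = IT load × (PUE − 1)); a minimal sketch using the example's figures:

```python
def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Facility overhead (cooling, power distribution, etc.) in MW.

    PUE = total facility power / IT equipment power,
    so overhead = IT load * (PUE - 1).
    """
    return it_load_mw * (pue - 1.0)

it_load = 10.0  # MW of IT equipment power
air_cooled = overhead_mw(it_load, 1.6)      # 6.0 MW of overhead
liquid_cooled = overhead_mw(it_load, 1.18)  # 1.8 MW of overhead
print(f"Overhead reclaimed: {air_cooled - liquid_cooled:.1f} MW")  # 4.2 MW
```

The same function makes it easy to model any facility size or PUE target when evaluating a cooling retrofit.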
Scaling From 100 kW to 200 kW Per Rack on the Roadmap
Current in-row Coolant Distribution Unit (CDU) systems support up to 100 kW per rack—already five to ten times higher than traditional air cooling's 10–20 kW ceiling. ASUS's roadmap targets 200 kW per rack capacity, engineered for next-generation NVIDIA Vera Rubin NVL72 platforms and beyond.
Context matters: NVIDIA GB200 NVL72 racks currently hit 130–140 kW. Future Vera Rubin systems will push higher still. This roadmap trajectory positions ASUS solutions not merely for today's AI workloads but for the infrastructure generation ahead—full details via the official ASUS announcement.
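To see what those rack-power ceilings mean for physical footprint, a quick sketch—the 2 MW cluster size is an assumption chosen for illustration:

```python
import math

def racks_needed(total_kw: float, per_rack_kw: float) -> int:
    """Racks required to host a given total IT load at a rack power ceiling."""
    return math.ceil(total_kw / per_rack_kw)

cluster_kw = 2000.0  # hypothetical 2 MW GPU cluster
print(racks_needed(cluster_kw, 20.0))   # air-cooled ceiling:  100 racks
print(racks_needed(cluster_kw, 100.0))  # current in-row CDU:   20 racks
print(racks_needed(cluster_kw, 200.0))  # roadmap target:       10 racks
```

A 10x reduction in rack count is also a 10x reduction in floor space, cabling runs, and network hops—part of why density, not just efficiency, drives the liquid transition.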
The Future of AI Infrastructure Runs Liquid — And ASUS Is Positioning Early
The numbers tell an undeniable story: liquid cooling has moved from experimental to essential. The market is projected to grow from $4.8 billion in 2025 to $27.1 billion by 2035, with Dell'Oro Group forecasting $7 billion by 2029 alone. More telling than any projection is the present reality—84% of all 2025 cooling investment already flowed toward liquid solutions, and 76% of new AI data centers are expected to adopt it as standard. The transition isn't coming. It's happening now.
ASUS's comprehensive framework announcement signals something deeper than a product launch: a fundamental repositioning as a total AI infrastructure solution provider rather than a component vendor. The three-tier approach—spanning direct-to-chip cooling, centralized distribution units, and hybrid configurations—paired with validated partners like Schneider Electric, Vertiv, Auras Technology, and Cooler Master, delivers what data center operators actually need: an end-to-end answer to the cooling problem that has become the primary constraint on AI deployment at scale.
The credibility is already earned. ASUS's NCHC Nano4 supercomputer ranking #29 on the TOP500 while achieving a PUE of 1.18 proves these aren't theoretical specifications—they're production-validated systems running real workloads.
For those who've watched ASUS build precision into consumer displays for years, the implication is clear: the same engineering rigor now shapes the infrastructure powering AI-generated content. The company isn't just selling cooling hardware. It's building the backbone of the next computational era.
ASUS's presence at NVIDIA GTC 2026 (March 16–19, San Jose) as a Diamond Sponsor at Booth #421 under the banner "Trusted AI, Total Flexibility" represents a company betting decisively that the future of AI infrastructure is liquid-cooled, modular, and strategically integrated. For enterprises still treating liquid cooling as optional, that moment of optionality has quietly passed.




