Future Trends and Challenges: Where Are Servers Heading?
As the wave of artificial intelligence sweeps across the globe, demand for AI computing power is surging at an astonishing annual growth rate of 300%, driving unprecedented transformation and challenges for server technology.
In today's era of rapid AI advancement, servers—as the core carriers of computing power—are undergoing revolutionary transformation. From hardware architecture to thermal management, from power supply systems to ecosystem development, the server industry is simultaneously addressing exponentially growing computational demands while redefining its future trajectory.
Explosive Computing Demand Marks a Turning Point for the Server Industry
The rapid advancement of AI technology has catalysed an explosive surge in computing demand. From GPT-1 to GPT-5, model parameter counts have soared from 117 million to trillions, while model architectures have evolved from dense designs towards sparse Mixture-of-Experts (MoE) models and multimodal frameworks.
This growth is directly reflected in global capital expenditure on data centre infrastructure. According to a Huaxin Securities research report, global data centre infrastructure capital expenditure will reach $3-4 trillion over the next five years, indicating vast prospects for the cloud infrastructure market.
Oracle's latest financial report further corroborates this trend, with cloud infrastructure revenue projected to surge by 77%. Its remaining performance obligations have soared to $455 billion, including a reported $300 billion contract with OpenAI alone.
Three Technological Trends Reshaping the Future of Servers
Thermal Management Innovation: Liquid Cooling Emerges as the Inevitable Choice
As AI chip computing power rapidly advances, thermal management has become the foremost challenge in server design. Mainstream AI chips already reach heat flux densities of 150 watts per square centimetre (roughly fifteen times that of a typical electric kettle's heating element), and projections exceed 200 watts per square centimetre in the near future.
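The kettle comparison above is easy to sanity-check with a heat-flux calculation. The chip figure matches the 150 W/cm² cited in the text; the kettle numbers (a 2 kW element spread over about 200 cm²) are illustrative assumptions, not measurements:

```python
# Rough heat-flux comparison: AI accelerator die vs. kettle heating element.
# Kettle power and element area are assumed values for illustration only.

def heat_flux_w_per_cm2(power_w: float, area_cm2: float) -> float:
    """Heat flux = dissipated power / surface area."""
    return power_w / area_cm2

chip_flux = heat_flux_w_per_cm2(power_w=1200, area_cm2=8)      # assumed ~1.2 kW die, ~8 cm2
kettle_flux = heat_flux_w_per_cm2(power_w=2000, area_cm2=200)  # assumed 2 kW element, ~200 cm2

print(f"chip:   {chip_flux:.0f} W/cm^2")          # 150 W/cm^2
print(f"kettle: {kettle_flux:.0f} W/cm^2")        # 10 W/cm^2
print(f"ratio:  {chip_flux / kettle_flux:.0f}x")  # 15x
```

Under these assumptions the ratio comes out to exactly the fifteen-fold figure quoted in the article.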
Air cooling is nearing its limits, making liquid cooling the standard configuration for AI servers. Direct liquid cooling (DLC), which pipes coolant to a cold plate mounted directly on the chip, now dominates the market, accounting for 80% of deployments.
To meet heightened thermal demands, cooling technology is evolving from single-phase to two-phase liquid cooling. Yang Chaobin, Huawei Director and CEO of the ICT Business Group, unequivocally stated: ‘Liquid-cooled data centres are becoming the inevitable choice for AI-driven data centres.’
Power Supply System Upgrades: Embracing the Megawatt-Scale Rack Era
Server power systems are undergoing revolutionary upgrades. Traditional 48V systems can no longer meet high-performance computing demands, driving the industry towards 400V DC systems while eyeing future 800V DC or even 1500V DC technologies.
Most notably, the 1-megawatt (MW) rack is poised to become reality. Flextronics President Chris Butler predicts: ‘I believe this will become widespread within the next year to eighteen months.’
This power level represents a monumental leap: 1 megawatt equals 1,000 kilowatts, more than sixty times the roughly 15-kilowatt racks still typical in today's data centres.
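The physics behind the push to higher DC voltages is straightforward: for a fixed power budget, bus current scales as I = P / V, and conductor losses scale with the square of that current. A minimal sketch for the 1 MW rack discussed above, using the bus voltages named in the text (losses ignored):

```python
# Why megawatt racks drive higher-voltage DC distribution.
# For fixed power P, bus current I = P / V; conductor loss ~ I^2 * R.

def bus_current_amps(power_w: float, voltage_v: float) -> float:
    """Current drawn from an ideal DC bus delivering power_w at voltage_v."""
    return power_w / voltage_v

RACK_POWER_W = 1_000_000  # the 1 MW rack projected in the article

for bus_voltage in (48, 400, 800):
    current = bus_current_amps(RACK_POWER_W, bus_voltage)
    print(f"{bus_voltage:>4} V bus -> {current:,.0f} A")
# A 48V bus would need over 20,000 A (impractical busbars, huge I^2R losses);
# the same megawatt at 800V DC needs only 1,250 A.
```

This is why the jump from 48V to 400V and 800V DC tracks rack power so closely: each voltage step cuts current, and therefore copper cross-section and distribution losses, proportionally.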
Storage Technology Upgrades: Addressing the Data Deluge Challenge
AI servers impose heightened demands on storage systems. The enterprise storage market is projected to reach approximately US$87.8 billion by 2025, with a compound annual growth rate (CAGR) of around 18.7% from 2024 to 2028.
The value of memory within individual AI servers has risen sharply. Taking an Intel Sapphire Rapids server as an example, its DRAM content costs about $3,930 and NAND about $1,532; in an H100 AI server these figures rise to roughly $7,860 for DRAM and $3,490 for NAND.
Solid-state drives (SSDs) are advancing towards PCIe 5.0, while data-centre SSDs adopt newer technologies such as QLC NAND and storage-class memory (SCM). The rising penetration of DDR5 and the introduction of MRDIMM further underscore the upgrade trajectory of enterprise storage under the AI trend.
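The move to PCIe 5.0 roughly doubles SSD bandwidth, which can be verified from the per-lane transfer rates in the PCIe specifications (16 GT/s for 4.0, 32 GT/s for 5.0, both with 128b/130b encoding). A back-of-envelope sketch for the x4 links typical of NVMe drives; real drives deliver somewhat less due to protocol overhead:

```python
# Usable PCIe link bandwidth for an NVMe SSD (x4 link typical).
# Per-lane rates and 128b/130b encoding come from the PCIe 4.0/5.0 specs.

def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    """Transfer rate x encoding efficiency (128/130) / 8 bits, times lane count."""
    return gt_per_s * (128 / 130) / 8 * lanes

gen4_x4 = pcie_gb_per_s(16, 4)  # PCIe 4.0 x4
gen5_x4 = pcie_gb_per_s(32, 4)  # PCIe 5.0 x4
print(f"PCIe 4.0 x4: {gen4_x4:.1f} GB/s")  # ~7.9 GB/s
print(f"PCIe 5.0 x4: {gen5_x4:.1f} GB/s")  # ~15.8 GB/s
```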
Domestic Server Ecosystem Accelerates Maturity
Amidst rapid international server technology advancements, China's domestic AI industrial chain competitiveness continues to strengthen.
Cambricon's private placement was approved, aiming to raise RMB 3.985 billion for large-model chip and software platform R&D. Its first-half revenue surged 4,347% year-on-year, achieving profitability. Shenghong Technology also secured approval for a RMB 1.9 billion private placement to expand overseas AI-related production capacity.
Differences exist between China and the US in AIDC development pathways. In power density, internationally leading projects have generally reached 120-150kW per cabinet, with NVIDIA planning to advance to 600kW by 2028. Domestic solutions rely more heavily on optical interconnects and clustering to scale out, with current mainstream levels at 40-60kW per cabinet.
Challenges and Opportunities in AIDC Standardisation
With surging AI computing demands, AIDC (AI data centre) construction faces unprecedented challenges. Professor Jin Hai, Chairman of the Global Computing Alliance, highlights the ‘triple test’ of thermal management, power supply and spatial constraints, alongside the ‘AI waiting for data centres’ dilemma caused by standard gaps and prolonged construction cycles.
The industry is accelerating AIDC standardisation, with the AIDC Infrastructure Specifications pre-released. Key technical standards for liquid cooling and power supply are under development, while policy measures supporting direct green electricity supply have been introduced.
Modular construction and rapid deployment are emerging as key technological trends, projected to reduce data centre delivery cycles from 6-8 months to 3 months, thereby accelerating the large-scale adoption of liquid cooling technology.
Core Challenges in Future Server Development
Despite promising prospects, server technology development confronts multiple challenges:
Thermal efficiency bottlenecks: As chip power density continues to increase, sustained innovation in thermal management is essential to ensure stable system operation.
Energy consumption concerns: High-density server stacking imposes significant energy demands, necessitating more efficient power supply systems and cooling solutions for data centres.
Lack of standardisation: Inconsistent standards across the supply chain, particularly the absence of unified specifications for liquid cooling technology, often results in equipment remaining idle for over six months post-delivery before deployment.
Insufficient Ecosystem Development: Collaborative advancement across the entire supply chain—encompassing chips, servers, and data centre infrastructure—is essential for driving technological innovation and standardisation.
Conclusion: Future Trajectories for Server Development
Confronting technical challenges and escalating computational demands, servers will evolve along three primary pathways:
Green Efficiency: Innovations such as liquid cooling and high-voltage direct current (HVDC) power supply will substantially enhance energy utilisation efficiency and reduce Power Usage Effectiveness (PUE) values.
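PUE, the metric cited here, is simply total facility power divided by IT equipment power, with 1.0 as the theoretical floor. A minimal sketch of how liquid cooling improves it; the power figures below are illustrative assumptions, not measurements from any real facility:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# All kW figures below are assumed values for illustration only.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Total facility power divided by IT load; 1.0 is the theoretical floor."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

air_cooled = pue(it_kw=1000, cooling_kw=450, other_overhead_kw=50)     # 1.50
liquid_cooled = pue(it_kw=1000, cooling_kw=100, other_overhead_kw=50)  # 1.15

print(f"air-cooled:    PUE {air_cooled:.2f}")
print(f"liquid-cooled: PUE {liquid_cooled:.2f}")
```

Because cooling is typically the largest non-IT load, shrinking it is the most direct lever on PUE, which is why liquid cooling and HVDC distribution appear together in this efficiency trend.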
Standardisation and Ecosystem Development: Establishing unified standards and specifications through industry collaboration to enable rapid deployment and scalable application.
Domestic Production and Autonomous Control: China's server industry will leverage its inherent strengths to develop technology pathways and solutions with distinctive Chinese characteristics.
Miao Fuyou, CTO of the Global Computing Alliance Secretariat, forecasts that domestic AIDC construction will maintain an annual growth rate exceeding 40% over the next two to three years. Subsequent annual additions will continue to increase before gradually levelling off. By around 2030, the annual growth rate is projected to stabilise at approximately 10%.
As the computational foundation of the digital economy, servers are undergoing profound transformation amid the AI wave. Only by seizing technological trends and addressing industry challenges can the industry secure a competitive edge in the era of intelligent computing.