Huawei is pioneering a transformative approach to artificial intelligence (AI) infrastructure that promises to revolutionize how AI systems are built, scaled, and operated globally. At HUAWEI CONNECT 2025, the company unveiled the SuperPoD architecture, a breakthrough technology designed to interconnect thousands of AI chips across multiple server cabinets, making them function as a single, cohesive computational unit.
The Vision: A Unified AI Computing Powerhouse
Traditional AI deployments generally consist of distributed servers operating independently or loosely coordinated through networking protocols. Huawei’s SuperPoD disrupts this model by creating a “single logical machine” from thousands of individual processors, capable of “learning, thinking, and reasoning as one,” according to company executives. This unified approach addresses the bottlenecks inherent in scaling AI workloads and in designing highly efficient computing architectures.
The Technical Backbone: UnifiedBus 2.0
The core enabling technology behind SuperPoD is the UnifiedBus (UB) 2.0 interconnect protocol. According to Yang Chaobin, Huawei’s Director of the Board and CEO of the ICT Business Group, the protocol integrates physical servers at a deep hardware and software level, overcoming long-standing challenges of bandwidth, latency, and communication reliability at scale.
- High-Reliability Connectivity: Conventional copper connections offer high bandwidth only over short distances, while optical cables are prone to faults over long runs. UB 2.0 addresses both by building reliability into every layer of the protocol stack, from the physical layer up to the network layer, including a fault detection and switchover mechanism with a response time of roughly 100 nanoseconds that makes transient optical disconnections transparent to applications (a conceptual sketch follows this list).
- Bandwidth and Latency Improvements: The protocol delivers ultra-high bandwidth and remarkably low latency, critical for synchronizing thousands of AI chips seamlessly.
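Huawei has not published UnifiedBus 2.0 internals, so the following Python sketch is only a hypothetical illustration of the general idea behind transparent link failover: traffic is routed over redundant lanes, a faulty lane is detected and marked, and subsequent transfers use a healthy lane so the application never observes the fault. The class and function names here (Lane, ResilientLink) are invented for explanation, not Huawei APIs.

```python
# Hypothetical sketch of link-level failover. UnifiedBus 2.0 internals are not
# public; the classes and names below are invented purely for illustration.
from dataclasses import dataclass


@dataclass
class Lane:
    """One physical path (e.g., an optical lane) between two endpoints."""
    name: str
    healthy: bool = True

    def transmit(self, payload: bytes) -> bytes:
        if not self.healthy:
            raise ConnectionError(f"lane {self.name} is down")
        return payload  # a loopback stands in for a real transfer


class ResilientLink:
    """Sends over the first healthy lane, masking lane faults from callers."""

    def __init__(self, lanes: list[Lane]) -> None:
        self.lanes = lanes

    def send(self, payload: bytes) -> bytes:
        # In the real interconnect, fault detection and switchover reportedly
        # happen in hardware within ~100 ns; this loop only shows the control flow.
        for lane in self.lanes:
            if not lane.healthy:
                continue
            try:
                return lane.transmit(payload)
            except ConnectionError:
                lane.healthy = False  # mark the faulty lane and try the next one
        raise ConnectionError("all lanes are down")


if __name__ == "__main__":
    link = ResilientLink([Lane("optical-0"), Lane("optical-1")])
    link.lanes[0].healthy = False  # simulate a transient optical fault
    assert link.send(b"gradients") == b"gradients"  # the caller never sees the fault
    print("healthy lanes:", [lane.name for lane in link.lanes if lane.healthy])
```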
Eric Xu, Huawei’s Deputy Chairman, stressed that this approach is key to delivering dependable, scalable AI infrastructure amid constraints on semiconductor manufacturing, particularly in China, because it yields practical, sustainable AI computing power without relying on the most advanced process nodes.
SuperPoD Architecture: Massive Scale and Performance
The flagship implementation of this architecture is the Atlas 950 SuperPoD, which accommodates up to 8,192 Ascend 950DT AI chips spread over 160 cabinets within a 1,000 square meter footprint.
- Computational Power: It delivers roughly 8 exaFLOPS (EFLOPS) of FP8 compute and 16 EFLOPS of FP4 compute.
- Interconnect Bandwidth: 16 petabytes per second (PB/s), which Huawei says is more than ten times the global internet’s peak bandwidth.
- Memory Capacity and Latency: The system hosts 1,152 terabytes (TB) of memory with end-to-end latency of 2.1 microseconds, which Huawei describes as a record low.
Building upon this, the Atlas 960 SuperPoD is planned to scale further, with 15,488 Ascend 960 chips occupying 220 cabinets across 2,200 square meters. It is projected to deliver 30 EFLOPS (FP8), 60 EFLOPS (FP4), 4,460 TB of memory, and 34 PB/s of interconnect bandwidth, setting new benchmarks for AI system performance.
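For a rough sense of per-chip scale, the announced system totals above can be divided by the chip counts. The snippet below is back-of-envelope arithmetic on those published figures only; the derived per-chip numbers are illustrative, not official specifications.

```python
# Back-of-envelope arithmetic from the announced system totals above;
# the derived per-chip figures are illustrative, not official specifications.
systems = {
    "Atlas 950 SuperPoD": {"chips": 8_192,  "fp8_eflops": 8,  "memory_tb": 1_152},
    "Atlas 960 SuperPoD": {"chips": 15_488, "fp8_eflops": 30, "memory_tb": 4_460},
}

for name, s in systems.items():
    fp8_pflops_per_chip = s["fp8_eflops"] * 1_000 / s["chips"]  # 1 EFLOPS = 1,000 PFLOPS
    memory_gb_per_chip = s["memory_tb"] * 1_000 / s["chips"]    # decimal TB -> GB
    print(f"{name}: ~{fp8_pflops_per_chip:.2f} PFLOPS (FP8) and "
          f"~{memory_gb_per_chip:.0f} GB of memory per chip")
```

On this arithmetic alone, the Atlas 950 works out to roughly 1 PFLOPS of FP8 compute and about 140 GB of memory per chip, and the Atlas 960 to roughly twice that, reflecting that the announced system totals grow faster than the chip count between generations.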
Expanding Beyond AI: TaiShan 950 SuperPoD for General Computing
Huawei is extending the SuperPoD concept to general-purpose computing through the TaiShan 950 SuperPoD, built on Kunpeng 950 processors. The system aims to provide an alternative to legacy mainframes and mid-range servers, a pressing need in sectors like finance where computing demands are growing rapidly.
For instance, TaiShan 950 SuperPoD combined with Huawei’s distributed GaussDB database targets replacing traditional mainframes and proprietary database solutions, creating more flexible and scalable infrastructure options.
Open Architecture and Ecosystem Building
A key strategic move by Huawei is releasing the UnifiedBus 2.0 specifications as public, open standards, inviting collaboration and innovation across the industry.
- Open Hardware: This includes neural processing unit (NPU) modules, air- and liquid-cooled blade servers, AI cards, CPU boards, and cascade cards.
- Open Software: Huawei plans to open-source key software, including CANN compiler tools, the Mind series application enablement kits, and the openPangu foundation models, by the end of 2025.
This open approach aims to foster a vibrant ecosystem of technology partners developing specialized SuperPoD solutions tailored to various industrial scenarios, accelerating both innovation and adoption at scale.
Market Validation and Global Implications
In 2025 alone, more than 300 Atlas 900 A3 SuperPoD units have been deployed to over 20 customers across sectors such as Internet services, finance, telecommunications, energy, and manufacturing, which Huawei cites as evidence of the architecture’s practical viability.
The SuperPoD infrastructure strategy responds to China’s semiconductor manufacturing limitations by focusing on architecture and ecosystem innovation rather than solely on chip process nodes. This allows domestic technology growth within current manufacturing constraints.
Globally, Huawei’s open, modular SuperPoD offers an alternative to the proprietary AI infrastructure ecosystems prevalent among Western technology companies. While performance parity and commercial viability at scale remain to be validated, this approach could shift competitive dynamics in AI infrastructure worldwide.
Key Takeaways
- Huawei’s SuperPoD architecture unifies thousands of AI chips into one logical machine, enhancing scalability and efficiency.
- The UnifiedBus 2.0 protocol provides the high-reliability, high-bandwidth, low-latency interconnect crucial for large-scale AI computing.
- The Atlas 950 and 960 SuperPoDs target new performance benchmarks, with thousands to tens of thousands of chips per system, multi-petabyte-per-second interconnect bandwidth, and petabyte-scale memory.
- Opening hardware specifications and open-sourcing key software encourages ecosystem development and innovation beyond proprietary models.
- The architecture has applications beyond AI, targeting legacy mainframe replacement in enterprise sectors through the TaiShan 950 SuperPoD.
Conclusion
Huawei’s SuperPoD represents a paradigm shift in AI infrastructure architecture, focusing on deep integration, unprecedented scale, and an open ecosystem approach. By enabling thousands of AI chips to operate as one cohesive entity, this technology paves the way for more powerful, efficient, and scalable AI systems.
As AI demands continue to surge across industries, innovations like Huawei’s SuperPoD and UnifiedBus 2.0 will be critical in overcoming current hardware limitations and unlocking the next frontier of AI computing capabilities. The success of this open ecosystem model could significantly influence the future landscape of AI hardware infrastructure on a global scale.