As AI workloads push beyond the physical and power constraints of traditional data centers, Broadcom’s launch of the new Jericho4 Ethernet fabric router marks a pivotal moment for distributed AI infrastructure. This purpose-built chip targets a simple yet staggering goal: seamlessly interconnecting more than one million accelerators (GPUs, TPUs, and XPUs) across geographically dispersed facilities, with lossless, low-latency, ultra-secure networking even at distances beyond 100 kilometers.
Breaking Through the Single Data Center Ceiling
AI models are ballooning in size and complexity, outgrowing the power and floor space that any single data center can supply. Leading cloud providers and hyperscalers have already begun spreading their largest clusters across several sites, each drawing tens or even hundreds of megawatts. In this new era, the networking backbone is everything, which is where Jericho4 steps in as the AI superhighway, delivering 51.2 terabits per second (Tbps) of switching capacity.
Deep Buffering and Lossless RoCE for Distributed AI
One of Jericho4’s most notable innovations is its deep-buffered architecture. Traditional routers often drop packets under congestion, but in distributed AI training even modest packet loss can stall synchronization and slow an entire job to a crawl. Jericho4 counters this with high-bandwidth memory (HBM) buffering and hardware-based congestion management, so packets survive traffic bursts even on links spanning hundreds of kilometers. The chip natively supports RDMA over Converged Ethernet (RoCE), providing the lossless, reliable, low-latency transport needed to keep far-flung AI clusters in sync.
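To build intuition for why buffer depth matters, here is a deliberately simplified queue simulation, not Broadcom’s implementation: a sustained incast burst arrives faster than a link can drain, and a shallow buffer is forced to drop the excess while a deep (HBM-scale) buffer absorbs it. All numbers are arbitrary illustration values.

```python
# Toy model: packets arrive in a burst each step, the link drains a fixed
# amount, and anything that overflows the buffer is dropped. Real deep-buffer
# routers use HBM-backed virtual output queues and hardware scheduling;
# this only illustrates the absorb-vs-drop trade-off.

def simulate(buffer_capacity, burst, drain_rate, steps):
    """Return total packets dropped over `steps` rounds of burst/drain."""
    queued = drops = 0
    for _ in range(steps):
        queued += burst                      # burst arrives
        if queued > buffer_capacity:         # overflow is dropped
            drops += queued - buffer_capacity
            queued = buffer_capacity
        queued = max(0, queued - drain_rate) # link drains what it can
    return drops

# 150 packets/step arrive while the link drains only 100/step.
shallow_drops = simulate(buffer_capacity=200,  burst=150, drain_rate=100, steps=10)
deep_drops    = simulate(buffer_capacity=5000, burst=150, drain_rate=100, steps=10)
print(shallow_drops, deep_drops)  # the shallow buffer drops packets; the deep one drops none
```

The shallow buffer sheds traffic every step once it saturates, while the deep buffer rides out the entire burst with zero loss, which is exactly the property lossless RoCE transport depends on.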
HyperPort: Unleashing 3.2 Terabits Per Second per Port
The star feature here is the 3.2T HyperPort technology. By merging four 800GE links into a single logical port, HyperPort removes the flow-hashing and load-balancing inefficiencies of traditional link aggregation, and Broadcom cites up to a 70% improvement in network utilization as a result. A single Jericho4 system can scale to 36,000 HyperPorts, drastically simplifying network design and opening the door to true AI-scale fabrics at any distance.
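The utilization gain can be sketched with a toy model (my own simplification, not Broadcom’s methodology): when flows are hashed onto four separate 800G links, a few elephant flows can overload one link while others sit idle, whereas a single 3.2T logical pipe is limited only by total capacity.

```python
# Toy comparison: per-flow hashing across 4x800G links vs. one 3.2T pipe.
# Flow sizes and counts are made up for illustration.
import random

def hashed_utilization(flows, n_links=4, link_capacity=800):
    """Hash each flow onto one link; traffic above a link's capacity is stranded."""
    loads = [0.0] * n_links
    for f in flows:
        loads[random.randrange(n_links)] += f
    carried = sum(min(load, link_capacity) for load in loads)
    return carried / (n_links * link_capacity)

def hyperport_utilization(flows, capacity=3200):
    """One logical 3.2T port: only the aggregate capacity matters."""
    return min(sum(flows), capacity) / capacity

random.seed(42)
flows = [random.choice([50, 400]) for _ in range(12)]  # mix of mice and elephants
u_hashed = hashed_utilization(flows)
u_hyper = hyperport_utilization(flows)
print(f"hashed across 4x800G:  {u_hashed:.0%}")
print(f"single 3.2T HyperPort: {u_hyper:.0%}")
```

Because a hashed link can never carry more than its own 800G share, the single fat pipe always matches or beats the hashed layout in this model, which is the intuition behind HyperPort’s utilization claim.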
Advanced Security at the Speed of AI
When data travels between data centers, security cannot come at the expense of performance. Jericho4 supports full line-rate MACsec encryption on every port, managed through more than 200,000 security policies, all with no hit to throughput. Sensitive model data in transit across entire metro regions is locked down at every hop.
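Conceptually, a MACsec policy table maps traffic classifiers to encryption behavior. The sketch below is entirely hypothetical (the field names, keying by port/VLAN, and the `MacsecPolicy` structure are my own illustration; in silicon this is a hardware match table, not a Python dict), but it shows the shape of a per-frame policy lookup.

```python
# Hypothetical sketch of a MACsec policy lookup. In hardware this would be
# an exact-match/TCAM table evaluated at line rate; a dict stands in here.
from dataclasses import dataclass

@dataclass(frozen=True)
class MacsecPolicy:
    key_id: int     # which Secure Association key to encrypt with (illustrative)
    encrypt: bool   # encrypt, or pass integrity-only / cleartext

# Illustrative policy table keyed by (ingress port, VLAN).
policies = {
    (1, 100): MacsecPolicy(key_id=7, encrypt=True),
    (1, 200): MacsecPolicy(key_id=9, encrypt=True),
}

def classify(port, vlan, default=MacsecPolicy(key_id=0, encrypt=False)):
    """Return the policy governing a frame on (port, vlan); fall back to default."""
    return policies.get((port, vlan), default)

print(classify(1, 100))  # matched policy: encrypt with key 7
print(classify(2, 300))  # no match: default policy applies
```

The point of doing this lookup in hardware is that the decision, and the AES encryption it triggers, adds no serialization delay at 800G line rates.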
Energy Efficiency and Open Standards
Built on TSMC’s 3-nanometer process and leveraging 200G PAM4 SerDes technology, Jericho4 cuts power per bit by 40% compared to the previous generation, while extending reach and eliminating the need for signal retimers. The chip is also compliant with Ultra Ethernet Consortium (UEC) standards, making it drop-in ready for massive, multi-vendor AI environments with no vendor lock-in.
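A quick back-of-envelope calculation shows what a 40% per-bit reduction means at full capacity. The absolute energy-per-bit figure below is a placeholder I chose for illustration; only the 40% relative reduction and the 51.2 Tbps capacity come from the announcement.

```python
# Back-of-envelope: energy saved per second at full 51.2 Tbps.
# 10 pJ/bit for the previous generation is a hypothetical baseline;
# the 40% reduction is the figure Broadcom cites.
prev_pj_per_bit = 10.0
new_pj_per_bit = prev_pj_per_bit * (1 - 0.40)   # 40% less energy per bit

bits_per_second = 51.2e12                        # 51.2 Tbps switching capacity
saved_joules_per_second = (prev_pj_per_bit - new_pj_per_bit) * 1e-12 * bits_per_second
print(f"{new_pj_per_bit} pJ/bit, {saved_joules_per_second:.1f} J saved per second")
```

Scaled across the thousands of ports in a distributed cluster, those per-bit savings compound into a meaningful share of a facility’s power budget.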
The Battle: Jericho4 vs. NVIDIA
It’s impossible to ignore the elephant in the server room: NVIDIA has dominated AI networking so far with its InfiniBand and Spectrum-X products. Broadcom’s counter rests on openness, scale, and silicon-level innovation: making Ethernet the open, scalable backbone for AI and directly challenging proprietary alternatives.
More Than a Chip: An Entire Ethernet AI Platform
Jericho4 is not a lone wolf but part of a portfolio that includes Tomahawk 6 for networking inside the data center and the Thor family of AI-optimized NICs. This cohesive vision gives data center architects every ingredient needed to span from a single rack to region-wide, mega-scale AI clusters.
In this rapidly evolving AI hardware arms race, Broadcom’s Jericho4 sits right at the bleeding edge—solving the connectivity challenges that will define the next generation of distributed, AI-driven innovation. The future of AI is no longer confined to the four walls of a single data center. With solutions like Jericho4, the network itself becomes the limitless AI platform.