The networking landscape for artificial intelligence (AI) and high-performance computing (HPC) just saw a major shakeup. Broadcom has unveiled the Tomahawk Ultra, a switch chip offering ultra-low latency and record throughput, aimed squarely at Nvidia's dominance of the AI data center interconnect market.
Tomahawk Ultra delivers an impressive 250-nanosecond switch latency at 51.2 Tbps of throughput. Data moves between servers and accelerators at a pace tailored for demanding AI model training and large-scale inference workloads, setting a new bar for Ethernet in HPC circles.
But the innovation isn't only about speed. Broadcom's engineers have designed out traditional Ethernet bottlenecks, optimizing the protocol for AI's unique needs:
Lossless network fabric: Features like Link Layer Retry (LLR) and Credit-Based Flow Control (CBFC) virtually eliminate dropped packets and enable reliable, congestion-free transfers.
Efficient data movement: The switch reduces Ethernet header overhead from 46 bytes to just 10 bytes, boosting efficiency especially when transferring millions of small packets, which is vital for AI clusters.
In-network collective operations: Common AI compute tasks like AllReduce, Broadcast, and AllGather are performed within the switch hardware itself, minimizing CPU/GPU load and expediting distributed training jobs.
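A rough efficiency calculation shows why shrinking the header matters so much for small packets. The 46-byte and 10-byte overhead figures come from the article; the 64-byte payload is an illustrative assumption, typical of the small messages AI clusters exchange at high rates:

```python
def wire_efficiency(payload_bytes: int, header_bytes: int) -> float:
    """Fraction of the bytes on the wire that carry actual payload."""
    return payload_bytes / (payload_bytes + header_bytes)

# Assume 64-byte payloads, common for small AI cluster messages.
standard = wire_efficiency(64, 46)   # standard Ethernet overhead
optimized = wire_efficiency(64, 10)  # Tomahawk Ultra's optimized header
print(f"standard: {standard:.0%}, optimized: {optimized:.0%}")
# → standard: 58%, optimized: 86%
```

For small packets, trimming the header turns roughly 42% of wire overhead into usable bandwidth, which compounds across millions of messages per second.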
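To see what offloading a collective buys, here is a toy model of an in-network AllReduce: a central point (standing in for the switch) sums each element once across all senders and fans the result back out, so hosts skip the pairwise exchanges they would otherwise perform. The function name and structure are illustrative, not Broadcom's API:

```python
from typing import List

def switch_allreduce(contributions: List[List[float]]) -> List[List[float]]:
    """Toy in-network AllReduce: the 'switch' sums each vector element
    across all senders once, then returns the reduced vector to every
    participant, sparing the hosts from exchanging data pairwise."""
    reduced = [sum(vals) for vals in zip(*contributions)]
    return [list(reduced) for _ in contributions]

# Three accelerators each contribute a gradient vector.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(switch_allreduce(grads))  # every rank receives [9.0, 12.0]
```

Doing the reduction in the switch means each accelerator sends its data once and receives the result once, instead of burning GPU cycles and link bandwidth on the reduction itself.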
One of the most relevant aspects of Tomahawk Ultra is its commitment to an open Ethernet ecosystem. Whereas competitors like Nvidia use proprietary protocols such as NVLink (limiting cross-vendor interoperability), Broadcom harnesses a new breed of Scale-Up Ethernet (SUE) optimized for HPC and AI workloads.
This approach brings several advantages:
Ecosystem compatibility: Scale-Up Ethernet can tie together a vast array of processors—Broadcom claims support for at least 1,024 accelerators per fabric, well above the 576 GPUs per cluster supported by Nvidia’s latest NVLink Switch.
Seamless upgrades: The chip is pin-compatible with the previous-generation Tomahawk 5, allowing data centers to upgrade without re-architecting hardware infrastructure.
The Tomahawk Ultra is fabricated on TSMC's advanced process technology, the product of a multi-year engineering effort involving hundreds of engineers. According to Broadcom, the chip is already shipping and being deployed into rapidly scaling AI training clusters and supercomputing environments.
Key specifications at a glance:
Latency: 250 nanoseconds, a breakthrough for Ethernet in HPC.
Bandwidth: 51.2 Tbps, nearly double the throughput of Nvidia’s current NVLink switch offerings.
Packet handling: Up to 77 billion packets per second—even at minimum sizes—so AI networks can handle the demanding message rates thrown at them.
Advanced congestion control: Built-in mechanisms like forward error correction and flow control prevent packet loss, maintaining data integrity and performance under load.
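A back-of-the-envelope check shows the packet-rate figure is consistent with the bandwidth. Assuming standard Ethernet minimum frames of 64 bytes plus 20 bytes of preamble and inter-frame gap (an assumption on my part, not stated in the article):

```python
LINE_RATE_BPS = 51.2e12   # 51.2 Tbps aggregate throughput
MIN_FRAME_BYTES = 64      # standard Ethernet minimum frame
OVERHEAD_BYTES = 20       # preamble + inter-frame gap (assumed)

bits_per_packet = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8
packets_per_second = LINE_RATE_BPS / bits_per_packet
print(f"{packets_per_second / 1e9:.1f} billion packets/s")
# → 76.2 billion packets/s, in line with the ~77 Bpps figure
```

The arithmetic lands within a rounding error of the quoted 77 billion packets per second, suggesting the chip can sustain line rate even at the smallest frame sizes.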
With this release, Broadcom is signaling that Ethernet, long considered merely "good enough," has evolved into a formidable rival to proprietary AI networking protocols. The company's approach doesn't just challenge Nvidia's networking stack; it also supports a broader move toward open, interoperable infrastructures in high-performance data centers.
For tech companies building the future of AI infrastructure, solutions like the Tomahawk Ultra are pushing the boundaries and redefining what’s possible for AI networking at scale.

Editorial Team
futureTEKnow is a leading source for Technology, Startups, and Business News, spotlighting the most innovative companies and breakthrough trends in emerging tech sectors like Artificial Intelligence (AI), Robotics, and the Space Industry.

© 2026 All Rights Reserved.