nEye.ai’s $80M bet on optical circuit switching for AI data centers

How nEye.ai’s optical circuit switching for AI data centers aims to cut network power, boost GPU utilization, and attract hyperscalers after its $80M Series C.

When AI startups talk about “infrastructure,” they usually mean more GPUs, better cooling, maybe a power substation if things get really ambitious. nEye.ai is betting the real bottleneck sits somewhere more prosaic: in the cables and switches that try — and increasingly fail — to keep thousands of accelerators talking to each other efficiently. The Santa Clara–based startup just closed an $80 million Series C round to push its optical circuit switching chips into the heart of next‑generation AI data centers.

The round, led by Sutter Hill Ventures with participation from Alphabet’s CapitalG, Microsoft’s M12 and Socratic Partners, brings nEye.ai’s total funding to $152 million. That investor roster reads like a shortlist of hyperscaler adjacencies, and it’s a clear signal that the companies spending billions on AI clusters are running out of patience with incremental network upgrades.

What nEye.ai actually builds

nEye.ai describes itself as a pioneer in integrated optical interconnects, with a flagship optical circuit switch (OCS) built “on a chip.” Instead of shuffling traffic through layers of power‑hungry electronic switches, nEye’s hardware uses silicon photonics and MEMS (micro‑electro‑mechanical systems) to connect servers and accelerators with beams of light.

In practical terms, an OCS is a reconfigurable patch panel for light. nEye’s design integrates high‑radix optical switching into a compact, chip‑scale module that can establish direct optical paths among thousands of GPUs and memory nodes without repeated electronic conversions. The company claims this delivers ultra‑low latency, massive bandwidth and significantly lower power consumption compared to traditional leaf‑spine Ethernet fabrics.
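To make the "reconfigurable patch panel" idea concrete, here is a minimal Python sketch of how a controller might model an OCS: ports that are wired together in pairs, where connecting two ports creates a direct light path with no per-packet processing in between. The class and method names are hypothetical illustrations, not nEye.ai's actual control interface.

    class OpticalCircuitSwitch:
        """Toy model of an OCS: a reconfigurable optical crossbar.

        Connecting two ports establishes a direct light path between them;
        there is no per-packet processing, so each port can belong to at
        most one circuit at a time.
        """

        def __init__(self, num_ports: int):
            self.num_ports = num_ports
            self.peer = {}  # port -> port it is optically connected to

        def connect(self, a: int, b: int) -> None:
            for p in (a, b):
                if p in self.peer:
                    raise ValueError(f"port {p} is already in a circuit")
            # Conceptually one MEMS mirror move: a and b now share a light path.
            self.peer[a], self.peer[b] = b, a

        def disconnect(self, a: int) -> None:
            b = self.peer.pop(a)
            del self.peer[b]

    # Example: give two GPU ports a dedicated circuit, then release it.
    ocs = OpticalCircuitSwitch(num_ports=1024)
    ocs.connect(0, 512)   # direct optical path, no electronic hops
    ocs.disconnect(0)     # tear it down when the job finishes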

The pitch is simple: today’s AI clusters waste a lot of power and time just moving bits around. As model sizes grow and jobs spill across racks and even buildings, those losses compound into real money. A switch that can cut network energy draw while improving utilization of very expensive accelerators starts to look less like a science project and more like a line item in a TCO model.

Why AI networks are breaking

Over the last two years, hyperscalers and cloud providers have spent heavily on GPUs, custom AI accelerators and liquid cooling — but networking has remained a stubborn bottleneck. Ethernet fabrics were designed for bursty, mixed workloads, not tightly synchronized parallel computing across tens of thousands of accelerators.

As clusters scale, operators run into three problems:

  • Growing congestion and tail latency that slow distributed training.

  • Rising power budgets for top‑of‑rack and spine switches.

  • Complex, rigid architectures that are hard to reconfigure as workloads change.

nEye.ai’s OCS approach targets all three by treating the network as a dynamically reconfigurable optical backplane rather than a static hierarchy of packet switches. Instead of sending every packet through multiple hops, the system can create direct, point‑to‑point optical circuits for high‑bandwidth AI jobs, then tear them down when the job finishes.
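Continuing the toy model above, a job-level sketch of that lifecycle might look like the following: circuits matching a ring all-reduce pattern are set up before training and torn down afterward. The topology helper and scheduling flow are assumptions for illustration; a production fabric would drive this through the cluster's job scheduler and a real switch control plane.

    def ring_circuits(east_ports, west_ports):
        """Circuits for a ring all-reduce: each GPU's east-facing port
        connects to the next GPU's west-facing port, so every GPU can
        talk to both of its neighbors."""
        n = len(east_ports)
        return [(east_ports[i], west_ports[(i + 1) % n]) for i in range(n)]

    def run_job(ocs, east_ports, west_ports, train):
        circuits = ring_circuits(east_ports, west_ports)
        for a, b in circuits:
            ocs.connect(a, b)        # one-time optical reconfiguration
        try:
            train()                  # traffic flows point to point, hop-free
        finally:
            for a, _b in circuits:
                ocs.disconnect(a)    # free the ports for the next job

    # Usage, reusing the toy switch from the earlier sketch:
    run_job(ocs, east_ports=list(range(8)),
            west_ports=list(range(512, 520)),
            train=lambda: print("training..."))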

That model has been floating around research labs for more than a decade, particularly at places like UC Berkeley, where researchers developed MEMS‑based silicon photonics for wafer‑scale optical circuit switches to link GPUs and memory more efficiently.

The hard part has always been turning those prototypes into hardware that can survive a data center rack, and selling it to conservative operators who already have entire teams tuned to Ethernet.

The Series C: fuel for manufacturing, not just R&D

The new $80 million is explicitly earmarked for scaling up development and high‑volume manufacturing of nEye.ai’s optical circuit switches. That phrasing matters. It suggests the company believes the core technology is far enough along that the constraint is now fabs, packaging, testing and integration — all the messy, expensive parts that sit between a clever chip and a production‑ready system.

The fresh capital will go toward expanding manufacturing capacity and meeting performance requirements for deployment across next‑generation AI data centers, according to the company. That likely means tighter co‑design with hyperscalers’ internal networking teams, more rigorous reliability testing, and integration with the orchestration layers that schedule AI jobs.

Sutter Hill’s involvement is also notable. The firm has a track record of backing deeply technical infrastructure bets that take years to pay off but reshape markets when they do. Pair that with CapitalG and M12 — both with clear lines into major cloud businesses — and nEye.ai suddenly looks less like a speculative photonics experiment and more like a candidate for deployment in at least one of the big public clouds.

Business model: chips, systems or full fabrics?

nEye.ai is, at its core, a hardware company, but the form that takes commercially is still a live question. Public materials describe “OCS‑on‑a‑chip” products and “SuperSwitch” designs that roll high‑radix optical switching into compact modules for hyperscale data centers. That points to a few likely revenue paths:

  • Selling OCS chips or modules directly to switch OEMs and cloud providers.

  • Delivering complete optical switch systems into existing racks.

  • Licensing control software and network designs that orchestrate optical and electronic fabrics together.

Each comes with different unit economics and integration headaches. Chips and modules can, in theory, ride existing OEM channels but force nEye.ai to compete on performance and cost against incumbents.

Full systems give the startup more control and margin but demand more capital, more field engineering and a longer sales cycle. Software and reference architectures could be the connective tissue that makes either approach deployable at scale.

For now, the company is positioning its technology as a way to “reimagine data center connectivity” for hyperscale AI environments. That signals a target customer profile: operators who are already building dedicated AI fabrics and are willing to experiment with new topologies if it helps keep accelerator utilization high. Everyone else will watch the pilots and benchmark results before ripping out their leaf‑spine Ethernet.

The incumbents won’t stand still

Optical switching for data centers is not a white‑space market. Incumbent networking vendors have been layering more optics, higher‑speed SerDes and proprietary congestion controls into their roadmaps for years. Some have flirted with optical circuit switching and silicon photonics of their own, especially for inter‑data‑center links and specialized HPC workloads.

That sets up a familiar tension. A startup like nEye.ai can move faster and design from first principles around AI workloads, but it has to win space in racks already spoken for by well‑entrenched suppliers. Hyperscalers can dual‑source and experiment at the margins, yet they are reluctant to introduce new failure modes into critical AI clusters without a mountain of reliability data.

The presence of CapitalG and M12 suggests at least some large buyers are open to the idea that networking needs a more radical redesign for AI. The question is whether they roll this technology into their own internal designs, push OEMs to adopt it or encourage startups like nEye.ai to grow into full‑fledged system vendors. In each scenario, the bargaining power — and the margin profile — look very different for nEye’s eventual business.

Can optical circuit switching scale beyond pilots?

Scaling beyond a few high‑visibility pilots will require more than clever photonics. To become standard kit in AI data centers, nEye.ai has to clear several hurdles:

  • Demonstrate consistent reliability over years in production environments.

  • Integrate seamlessly with existing data center management and job schedulers.

  • Prove that energy savings and performance gains justify the capex and integration risk.

The company argues its OCS design delivers superior power efficiency and bandwidth, and that these gains map directly to the economics of AI training and inference at scale. If operators can run more jobs per GPU per watt, they can extract more revenue from fixed power and real estate budgets. That argument will be tested in P&L statements long before it shows up in conference keynotes.
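As a back-of-the-envelope illustration of that economic argument, the sketch below converts reclaimed network power into headroom for more accelerators under a fixed facility budget. Every figure in it is a hypothetical placeholder, not a number reported by nEye.ai or any operator.

    # Illustrative only: placeholder figures, not vendor-reported data.
    facility_budget_mw = 50.0   # fixed power envelope for the cluster
    network_share      = 0.15   # assumed fraction of power spent on networking
    ocs_savings        = 0.40   # assumed cut in network power from an OCS fabric

    reclaimed_mw = facility_budget_mw * network_share * ocs_savings  # 3.0 MW
    gpu_kw = 1.0                # assumed all-in draw per accelerator
    extra_gpus = reclaimed_mw * 1000 / gpu_kw

    print(f"~{reclaimed_mw:.1f} MW reclaimed -> room for ~{extra_gpus:.0f} more accelerators")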

There is also a regulatory and policy subtext here. As AI clusters draw more power — and face more scrutiny from local communities and grid operators — any technology that meaningfully reduces energy consumption per unit of compute will get a friendlier hearing from city officials and utilities. Optical circuit switching will not solve data centers’ land‑use or water‑cooling debates, but it can shave megawatts at the margins, which is increasingly where permitting battles are fought.

What to watch next

With $80 million in fresh capital and a cap table stacked with strategic investors, nEye.ai now has a narrow but important window to prove that optical circuit switching can become a first‑class citizen in AI infrastructure, not just a lab curiosity. Near‑term signals to watch include early public deployments, reference designs with major clouds, and signs that networking power budgets are starting to matter as much as GPU availability in procurement decisions.

If nEye.ai can turn its OCS‑on‑a‑chip into a reliable, repeatable building block for hyperscale AI fabrics, it will force incumbents to respond — either through partnerships, acquisitions or their own accelerated photonics efforts. If not, optical circuit switching may remain one of those technologies everyone agrees is “the future of data centers,” right up until the moment they renew their Ethernet contracts for another generation.

Sofia Klein is a Staff Writer at futureTEKnow, writing about European AI ecosystems, safety research, and the regulatory frameworks shaping how AI gets deployed.
