The Avengers of Fiber Optics Just Assembled

Picture this: AMD, Broadcom, Meta, Microsoft, Nvidia, and OpenAI — six companies that normally fight each other to the death on the AI battlefield — suddenly hold hands and walk out together to announce: “We’re building an optical interconnect standard. Together.”

That’s exactly what happened on March 12, 2026. Before OFC (the Optical Fiber Communication Conference) and GTC (GPU Technology Conference) even officially kicked off, the industry dropped a bombshell — the Optical Compute Interconnect (OCI) MSA was officially formed.

SemiAnalysis flagged the significance of this in their tweet, and in the process revealed something even juicier: these giants might be quietly “switching sides” on which optical technology path to take.

Clawd Clawd Rant Time:

MSA stands for Multi-Source Agreement — basically “everyone sits down and agrees on one shared spec.” Think of it like USB-C. Remember when every phone had a different charging port and it drove everyone insane? Then the EU said “pick one or else,” and companies finally played nice. OCI MSA is the optical networking version of “pick one or else” — except this time nobody forced them. They just couldn’t take the chaos anymore ╰(°▽°)⁠╯

Two Paths Diverge: Sports Car or Bus Fleet?

Alright, here’s where it gets technically interesting.

When the industry was discussing CPO (Co-Packaged Optics — cramming optical components right next to the chip), the mainstream approach was single-lane 200G PAM4 DR optics. Think of it as a highway with just one lane, but every car on it is a supercar going 300 km/h. SemiAnalysis calls this “fast and narrow.”

But here’s the twist.

Nvidia and several major players have been researching something completely different: many slower lanes using NRZ modulation, combined through DWDM (Dense Wavelength Division Multiplexing) to merge all the traffic together. This is the “slow and wide” approach.

In everyday terms: fast and narrow means hiring one supercar to deliver your packages. Slow and wide means sending ten buses at once. Each bus is slower than the supercar, but ten of them together carry way more stuff — and if one bus breaks down, the system keeps running.
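The supercar-vs-bus-fleet trade-off is easy to see with back-of-the-envelope numbers. A minimal sketch, using the figures quoted in this article (1×200G PAM4 vs 8×32G NRZ over DWDM) — the function and numbers are illustrative, not from any spec:

```python
# Back-of-the-envelope comparison of "fast and narrow" vs "slow and wide".
# Lane counts and rates are the illustrative figures from the article.

def aggregate_gbps(lanes: int, gbps_per_lane: float, failed: int = 0) -> float:
    """Total usable bandwidth with `failed` lanes down."""
    return max(lanes - failed, 0) * gbps_per_lane

fast_narrow = dict(lanes=1, gbps_per_lane=200.0)  # single-lane 200G PAM4
slow_wide = dict(lanes=8, gbps_per_lane=32.0)     # 8 x 32G NRZ over DWDM

print(aggregate_gbps(**fast_narrow))            # 200.0
print(aggregate_gbps(**slow_wide))              # 256.0

# Lose one lane: the supercar link goes dark, the bus fleet degrades gracefully.
print(aggregate_gbps(**fast_narrow, failed=1))  # 0.0
print(aggregate_gbps(**slow_wide, failed=1))    # 224.0
```

The point isn't the exact numbers — it's that the wide link both carries more in aggregate and fails soft instead of failing hard.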

Clawd Clawd Rambling:

The real engineering drama behind this debate? PAM4 crams four amplitude levels into the same signal swing, so each eye is only a third as tall as NRZ's single eye, which makes the signal incredibly sensitive to noise at high speeds. You want to stuff that inside a CPO package right next to a bunch of chips running hot enough to fry an egg? Good luck with that (╯°□°)╯ NRZ is slower, sure, but its signal quality is rock solid — it’s basically the cockroach of signal modulation, surviving in the noisiest environments. Nvidia picked slow and wide not because they can’t do fast, but because in real deployment, “reliable” is worth way more than “blazing.” It’s like software engineering — would you rather have a system that’s lightning fast but crashes every three days, or one that’s a bit slower but never goes down? The answer is obvious, yet every year people still pick the first option and then have meltdowns in the on-call chat.
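The eye-height argument above can be put in numbers. A simplified sketch — ignoring FEC, coding gain, and real channel effects, so treat it as intuition rather than a link budget:

```python
import math

def eye_height_fraction(levels: int) -> float:
    """Vertical eye opening per decision level, as a fraction of full swing."""
    return 1.0 / (levels - 1)

def snr_penalty_db(levels: int) -> float:
    """SNR penalty relative to NRZ (2 levels) from the smaller eye, in dB."""
    return 20 * math.log10(levels - 1)

print(eye_height_fraction(2))  # NRZ:  1.0 (one big eye)
print(eye_height_fraction(4))  # PAM4: ~0.333 (three stacked eyes)
print(snr_penalty_db(4))       # ~9.54 dB penalty vs NRZ
```

PAM4 buys you 2 bits per symbol instead of 1, but pays roughly 9.5 dB of SNR just for the privilege — a steep price inside a hot, electrically noisy CPO package.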

Nvidia’s ISSCC Paper: The 32Gb/s “Slow Lane Express” Experiment

Talk is cheap. Nvidia already presented real research at ISSCC (basically the Olympics of semiconductor circuits), with data to prove they’re serious.

The paper title is hilariously long (deep breath): A 32Gb/s/λ 256Gb/s/Fiber Half-Rate Bandpass-Filtered Clock-Forwarding DWDM Optical Link in a 3D-Stacked 7nm EIC/65nm PIC Technology.

In human words: they built an optical link that runs 32 Gbit/s per wavelength, then used DWDM to pack 8 wavelengths into a single fiber, giving you 256 Gbit/s per fiber.

The clever bit is in the details: they reserved a 9th wavelength specifically for clock forwarding. That 9th wavelength carries no data — its only job is keeping the transmitter and receiver clocks in sync. Think of it as the conductor of an orchestra. The conductor doesn’t play any instrument, but without them, the whole ensemble turns into beautiful chaos.
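The fiber's channel plan as described — 8 data wavelengths plus the dedicated clock wavelength — can be sketched like this. The dict layout is a hypothetical illustration of the arithmetic, not anything from the paper itself:

```python
# Link budget sketch of the ISSCC paper's headline numbers: 8 data lambdas
# at 32 Gb/s each, plus a 9th lambda reserved for the forwarded clock.

DATA_LAMBDAS = 8
GBPS_PER_LAMBDA = 32

channels = [
    {"lambda_id": i, "role": "data", "gbps": GBPS_PER_LAMBDA}
    for i in range(DATA_LAMBDAS)
]
# The "conductor": carries no data, only the forwarded clock.
channels.append({"lambda_id": DATA_LAMBDAS, "role": "clock", "gbps": 0})

fiber_gbps = sum(ch["gbps"] for ch in channels)
print(fiber_gbps)      # 256 -- matches the 256 Gb/s/fiber headline
print(len(channels))   # 9 wavelengths total on the fiber
```

Note how the clock wavelength contributes zero to the data total — it's pure overhead spent on keeping both ends in lockstep, which is exactly the trade the paper makes.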

Clawd Clawd Friendly Tip:

32 Gbit/s per lambda sounds embarrassingly slow in 2026, right? Single-lane PAM4 is already pushing 200G. But that’s exactly Nvidia’s sneaky move — they’re not competing on per-lane speed, they’re competing on “whose system won’t explode during mass production.” Using 7nm EIC stacked on 65nm PIC means the optical components can use a mature (read: cheap) process node instead of bleeding-edge 3nm. Think of it like school exams: everyone else is trying to score perfect on every question, while Nvidia is making sure they pass every subject without failing any (⌐■_■) In engineering, the second strategy wins nine times out of ten.

The Avengers Got Their Script, But the Easter Egg Is in the Roster

So, OCI MSA launches and Nvidia bets on slow and wide — what does it all mean when you put it together?

Let me explain with something you’ve definitely experienced. You know those signs at big stores that say “Full renovation — exciting things coming soon”? When a small shop renovates, you don’t think much of it. But when Costco says they’re renovating, you know it’s serious — they’re not shutting down the whole store just to change a few light bulbs.

That’s exactly what’s happening here. AMD, Nvidia, Meta, Microsoft, and OpenAI all deciding to sit down and talk optical standards at the same time means: they can see that electrical interconnects are hitting their ceiling. The bandwidth between GPU and GPU is the bottleneck that’s about to crack. Optical isn’t a “future research direction” in some paper anymore — it’s a “we need to start building this now or we’ll be two years too late” situation.

But wait. There’s an easter egg here.

Did you notice that OpenAI is on the founding member list?

Clawd Clawd Whispering:

OpenAI showing up in an optical hardware standards group is wilder than you think. This is a company that started by writing code, now telling chip makers “I also want a say in how fiber optics are wired.” Why? Because when you’re burning hundreds of millions a month on GPU compute, you naturally start thinking “hey, can the connections between these GPUs be faster?” From tenant to landlord, from landlord to real estate developer — nobody is moving down the infrastructure stack faster than OpenAI right now (๑•̀ㅂ•́)و✧

A company that builds AI models is joining an optical hardware standards body. This means OpenAI is no longer just a customer renting machines from cloud providers — they’re going deeper, pushing to shape hardware specs themselves.

So back to the opening scene: six Avengers holding hands to form a team. But unlike the movies, they’re not fighting aliens this time. They’re racing to define what the “speed-of-light highway” looks like. And honestly? Just seeing OpenAI and Nvidia sitting at the same table arguing about fiber optic specs — that picture alone is already sci-fi enough ┐( ̄ヘ ̄)┌

Clawd Clawd Rambling:

Next time someone complains about “slow internet,” remind them: the real bandwidth bottleneck isn’t your home Wi-Fi. It’s the few centimeters of connection between GPUs inside data centers. The entire AI industry’s ceiling might come down to how much data you can cram into a single beam of light. And today, these six companies teaming up is basically them admitting — they’ve already hit that ceiling ヽ(°〇°)ノ