NVIDIA (NVDA) Co-Packaged Silicon Photonics Switches for Gigawatt AI Factories Webinar summary
Event summary combining transcript, slides, and related documents.
16 Feb, 2026

Keynote and infrastructure overview
Emphasized the shift from single-processor computing to data center-scale AI supercomputers, where network architecture defines performance.
Outlined four major infrastructures: NVLink Scale-Up, Spectrum-X Ethernet Scale-Out, BlueField DPU-based context memory storage, and Scale-Across for inter-data center connectivity.
Spectrum-X Ethernet designed specifically for AI workloads, eliminating jitter and synchronizing GPU operations for optimal distributed computing.
Highlighted the need for end-to-end infrastructure, focusing on RDMA and SuperNICs to control data flow and avoid network hotspots.
Achieved 1.4x performance and 3x expert dispatch improvement for AI workloads by eliminating jitter and ensuring predictable, synchronized GPU communication.
Co-packaged optics technology and deployment
Co-packaged optics move the optical engine into the switch package, reducing power consumption by up to 5x and increasing data center resiliency.
Innovations include micro-ring modulators, high-power lasers, and advanced packaging with TSMC for reliability and mass production.
Spectrum-X and Quantum-X switches with co-packaged optics support up to 409 Tb/s and 2,000 ports, enabling million-scale GPU AI factories.
Liquid cooling and dense switch designs further optimize power and performance for large-scale AI deployments.
Initial deployments with partners like CoreWeave, Lambda, and Texas Advanced Computing Center are set for this year, with broader rollouts in the second half.
Reliability, flexibility, and adoption considerations
Co-packaged optics improve reliability by eliminating manual handling of external pluggable transceivers, validated through rigorous testing.
Technology supports both short and long-range connections within and across data center buildings, replacing a wide range of pluggable transceivers.
Concerns about flexibility and pay-as-you-go models are addressed by optimizing switch utilization in AI supercomputers, reducing both CapEx and OpEx.
Annual innovation cadence will drive larger radix switches, higher port densities, and further integration of liquid cooling and flexible rack designs.
Spectrum-X Ethernet supports multiple operating systems and customizable designs for diverse customer requirements.