AI Data Center Scale-Up Architecture with Optical I/O
Optical I/O boosts AI scale-up with higher bandwidth, increased power efficiency, and lower latency for enhanced AI performance.
Ayar Labs Optical I/O Meets the Demands of AI Scale-Up Infrastructure to Break Through the Limitations of AI Training and Inference
Large-scale AI training and inference workloads strain current computing infrastructure, leading to cost, power, and scalability challenges. Both scale-out (networking between clusters) and scale-up (communication within clusters) architectures are under increasing pressure to meet AI performance demands.
For both the AI data center and in-house infrastructure, the solution is co-packaged optics (CPO), which integrates photonics and electronics in a single package to boost AI performance and dramatically cut energy consumption. Network switches with CPO have been announced to address scale-out needs, but scale-up is the larger challenge, requiring at least 10x the bandwidth and a 10x reduction in latency to overcome AI data center limitations.
Breaking through these AI performance barriers requires integrating CPO directly into the GPU package. Ayar Labs optical I/O overcomes bandwidth density, reach, and power limitations of electrical links, enabling scale-up architecture to enhance AI inference performance, interactivity, and profitability.
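To see why the bandwidth and power claims above matter together, a link's power draw is simply its bandwidth multiplied by its energy per bit. The sketch below runs that arithmetic with purely illustrative placeholder figures (the bandwidths and pJ/bit values are assumptions for the comparison, not Ayar Labs specifications):

```python
# Illustrative scale-up link budget: power = bandwidth x energy-per-bit.
# All numeric figures below are hypothetical placeholders, not vendor specs.

def link_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Power drawn by one link, from bandwidth (Gb/s) and energy (pJ/bit)."""
    return bandwidth_gbps * 1e9 * energy_pj_per_bit * 1e-12

# Assumed figures: an electrical SerDes link vs. an optical I/O link with
# 10x the bandwidth at half the energy per bit.
electrical_w = link_power_watts(bandwidth_gbps=200, energy_pj_per_bit=5.0)
optical_w = link_power_watts(bandwidth_gbps=2000, energy_pj_per_bit=2.5)

print(f"electrical: {electrical_w:.1f} W for 200 Gb/s")
print(f"optical:    {optical_w:.1f} W for 2000 Gb/s")
```

Under these assumed numbers, ten times the bandwidth costs only five times the power, i.e. twice the bandwidth per watt, which is the kind of trade that makes an optics-in-the-GPU-package scale-up fabric attractive.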


“Optical interconnects are needed to solve power density challenges in scale-up AI fabrics. We recognized early on the potential for co-packaged optics, which positioned us to drive adoption of optical solutions in AI applications. As we continue to push the boundaries of optical technologies, we’re also bringing together the supply chain, manufacturing, and testing and validation processes needed for customers to deploy these solutions at scale.”
— Mark Wade, CEO and Co-Founder of Ayar Labs
See the AI System Architecture Tool in Action
Discover how the Ayar Labs optical I/O solution drives the profitability and interactivity of large AI workloads with our updated AI System Architecture Tool. The tool simulates performance and economics across GPU and network configurations for scenarios including agentic AI and mixture of experts (MoE) models. Visit our booth at the following events to experience the tool in person and see how GPU and network architecture choices affect throughput, interactivity, and profitability.
- Supercomputing 2025: November 16-21, 2025
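The throughput/interactivity trade-off the tool explores can be sketched with a toy decode-step model: each step costs either a fixed compute time or the time to move per-user traffic over the scale-up fabric, whichever is larger. Everything here is a simplified assumption for illustration (the time and traffic constants are invented, and this is not the tool's actual model):

```python
# Toy model of throughput vs. interactivity for batched inference.
# All constants are illustrative assumptions, not measured figures.

def per_token_step_s(batch: int, compute_s: float,
                     bytes_per_user: float, fabric_gbps: float) -> float:
    """One decode step: fixed compute time vs. batch-scaled fabric traffic."""
    comm_s = batch * bytes_per_user * 8 / (fabric_gbps * 1e9)
    return max(compute_s, comm_s)

def summarize(batch: int, fabric_gbps: float) -> tuple[float, float]:
    """Return (tokens/s per user, tokens/s for the whole batch)."""
    step = per_token_step_s(batch, compute_s=5e-3,
                            bytes_per_user=2e6, fabric_gbps=fabric_gbps)
    interactivity = 1.0 / step          # what each user experiences
    throughput = batch / step           # what the operator monetizes
    return interactivity, throughput

for gbps in (400, 4000):                # assumed fabric: baseline vs. 10x
    inter, thru = summarize(batch=128, fabric_gbps=gbps)
    print(f"{gbps} Gb/s fabric: {inter:.0f} tok/s/user, {thru:.0f} tok/s total")
```

With these assumed numbers, the 400 Gb/s fabric is communication-bound at batch 128, so both interactivity and throughput degrade; the 10x fabric keeps the step compute-bound, which is the qualitative effect the tool lets you explore across real configurations.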
Understanding Scale-Up Architecture and Its Role in AI Infrastructure
Explore key concepts, technologies, and protocols behind scale-up, scale-out, and optical interconnects.
Glossary of Terms Related to AI Scaling and Optical Interconnects
Learn More about Optical I/O for AI Infrastructure
Contact us at [email protected] to learn more.

