AI Infrastructure


Enabling Exascale AI Training and Inference 

The Challenge 

Modern AI training clusters require 10,000+ GPUs operating in tight synchronization. Interconnect bandwidth has become the primary bottleneck, limiting model size, training speed, and ultimately AI capability. Current solutions cannot deliver the bandwidth density needed for next-generation foundation models without consuming unsustainable power.

Our Solution 

Direct chip-to-chip optical interconnects eliminate electrical-optical conversion bottlenecks while maintaining the efficiency required for sustainable AI infrastructure scaling.
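
To make the claim concrete, the sketch below compares a first-order energy-per-bit budget for a conventional link (host SerDes driving a pluggable module with its own DSP and electrical-optical conversions) against a direct chip-to-chip optical link. Every per-stage energy is a hypothetical placeholder chosen only to illustrate where the savings come from, not a measured figure.

```python
# First-order link energy budget comparison (one direction of a link).
# Every per-stage value below is a hypothetical placeholder; the point
# is which stages disappear from the path, not the numbers themselves.

def total_pj_per_bit(stages):
    """Sum per-stage energies, each given in pJ/bit."""
    return sum(stages.values())

# Conventional link: long-reach electrical SerDes drives a pluggable
# optical module with its own retimer/DSP and E-O / O-E conversion.
conventional = {
    "host_serdes": 4.0,       # long-reach electrical SerDes (assumed)
    "module_dsp": 5.0,        # pluggable retimer/DSP (assumed)
    "eo_oe_conversion": 3.0,  # modulator driver + TIA (assumed)
}

# Direct chip-to-chip optical link: the optical engine sits next to the
# die, removing the long-reach SerDes and the module DSP from the path.
direct_optical = {
    "short_reach_io": 0.8,    # die-to-optical-engine I/O (assumed)
    "eo_oe_conversion": 3.0,  # modulator driver + TIA (assumed)
}

for name, stages in (("conventional", conventional),
                     ("direct optical", direct_optical)):
    print(f"{name:>14}: {total_pj_per_bit(stages):.1f} pJ/bit")
```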

Key Benefits: 

  • Low-cost, low-power DWDM laser sources
  • Up to 5x reduction in external laser source (ELS) power consumption
  • Enables "wide and slow" link architectures, dramatically reducing SerDes power consumption (see the sketch after this list)
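
Why "wide and slow" saves power: SerDes energy per bit typically grows super-linearly with per-lane rate, so splitting a fixed aggregate bandwidth across more, slower lanes lowers total I/O power. The sketch below illustrates this with a first-order scaling model; the exponent, coefficients, and link sizes are illustrative assumptions, not measured figures.

```python
# "Wide and slow" sketch: per-lane SerDes energy per bit tends to grow
# super-linearly with lane rate, so many slow lanes beat few fast lanes
# at the same aggregate bandwidth. All coefficients are assumptions.

def serdes_power_w(aggregate_gbps, lane_gbps,
                   e0_pj=0.5, rate_exp=1.5, ref_gbps=10.0):
    """Total SerDes power for a link split into lanes of `lane_gbps`.

    Assumes energy/bit scales as e0 * (lane_rate / ref_rate) ** rate_exp,
    a first-order model; the exponent is an illustrative assumption.
    """
    energy_pj_per_bit = e0_pj * (lane_gbps / ref_gbps) ** rate_exp
    return aggregate_gbps * 1e9 * energy_pj_per_bit * 1e-12  # watts

AGGREGATE_GBPS = 1600.0  # 1.6 Tb/s link, an illustrative size
for lanes, rate in ((16, 100.0), (64, 25.0)):  # narrow/fast vs wide/slow
    watts = serdes_power_w(AGGREGATE_GBPS, rate)
    print(f"{lanes:3d} lanes @ {rate:5.1f} Gb/s: {watts:.2f} W")
```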

Technical Specifications 

  • Number of wavelengths: >32 at 200 GHz channel spacing
  • Module power: Less than 2 W
  • Power efficiency: 3-5 pJ/bit (see the sanity check below)
  • Scalability: Supports 10,000+ node clusters
  • Compatibility: Directly compatible with current ELS interfaces
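
The headline figures above hang together, as the quick check below shows: a 2 W module budget at 3-5 pJ/bit implies roughly 400-667 Gb/s of traffic per module, and 32 channels on a 200 GHz grid occupy 6.4 THz of optical spectrum. Pairing the power and efficiency figures this way is an assumption for illustration; the specifications do not state per-module throughput.

```python
# Quick sanity check on the specification figures above.
# Pairing the 2 W budget with the 3-5 pJ/bit range is an assumption.

MODULE_POWER_W = 2.0                # "Less than 2 W per module"
EFFICIENCY_PJ_PER_BIT = (3.0, 5.0)  # "3-5 pJ/bit"

for pj_per_bit in EFFICIENCY_PJ_PER_BIT:
    # bits/s = watts / (joules per bit); 1 pJ = 1e-12 J
    gbps = MODULE_POWER_W / (pj_per_bit * 1e-12) / 1e9
    print(f"{pj_per_bit:.0f} pJ/bit -> ~{gbps:.0f} Gb/s within a "
          f"{MODULE_POWER_W:.0f} W budget")

# Optical spectrum occupied by the DWDM grid: 32 channels at 200 GHz
channels, spacing_ghz = 32, 200
print(f"{channels} ch x {spacing_ghz} GHz = "
      f"{channels * spacing_ghz / 1000:.1f} THz of spectrum")
```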

Validation 

Field-tested with NVIDIA AI infrastructure teams.

Target Applications 

  • Large language model (LLM) training
  • Computer vision and generative AI workloads 
  • Multi-modal foundation model development
  • AI inference farms and serving infrastructure 
  • Distributed reinforcement learning