19 Apr 2026

Scaling AI Clusters across multi-site deployments

DriveNets Stand: 725
Large language models (LLMs), generative AI, and advanced analytics are pushing data center infrastructure to its limits. Enterprises in industries such as automotive, finance, and pharmaceuticals, as well as GPU-as-a-service (GPUaaS, or "neocloud") providers, are building their own AI infrastructure and scaling beyond a single data center to meet surging AI demand. This expansion addresses the growing computational demands, power and space limitations, resiliency needs, and data locality requirements of modern AI workloads.

While scaling AI workloads to multiple sites is often necessary, it creates a networking bottleneck. Unlike standard data traffic, AI and high-performance computing (HPC) workloads require low and predictable latency, high throughput, and lossless transport.

DriveNets AI Fabric is designed to address the challenges of cluster scale-out within the data center and scale-across distributed environments.

DriveNets AI Fabric offers a scheduled-fabric Ethernet solution that delivers industry-leading job completion time (JCT) performance without sacrificing the flexibility of standard Ethernet. The solution supports both shallow- and deep-buffer Jericho3-AI switches as cluster leaves and Ramon3 switches as cluster spines.

With DriveNets AI Fabric, AI cluster builders can construct distributed AI infrastructure with high interconnect efficiency and industry-leading performance, freeing them from the power and space limitations of a single-site deployment.