Data Center Network Architecture – Evolution, Current State, and Future

Abstract:

The dynamic development of services in ICT networks is inherently tied to the need to adapt network infrastructure to new and growing demands. Data centers are a crucial element of the networks used to deliver these services. This article traces the evolution of data center structures, clearly identifying the factors that drive changes in their architecture in response to the requirements of new network services. It analyzes data center network architectures in the context of horizontal scaling, east-west traffic handling, and the requirements of AI clusters, and outlines the transition from traditional three-tier architectures to Clos topologies and lossless backend networks for GPU clusters. The scope of the study covers scalability, oversubscription, network virtualization, and the specific characteristics of rail-based topologies. The research methodology comprises a literature analysis, a review of engineering standards, and an evaluation of technical documentation for industry solutions. It is demonstrated that three-tier architectures have become insufficient due to their limited scalability and inefficient handling of intra-data-center traffic. Furthermore, it is concluded that AI workloads call for rail-based architectures, which shorten transmission paths and reduce congestion by attaching GPUs with the same rail index to a common switch. Finally, it is confirmed that the evolution of data center architectures is driven primarily by traffic characteristics, latency requirements, and the scale of parallel processing.
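Oversubscription, one of the factors within the scope of the study, can be illustrated with a minimal sketch. The function name, port counts, and link speeds below are illustrative assumptions, not values taken from the article:

```python
# Minimal sketch (illustrative, not from the article): oversubscription
# ratio of a leaf switch in a leaf-spine (folded Clos) fabric. The ratio
# compares total downlink (server-facing) bandwidth to total uplink
# (spine-facing) bandwidth; a 1:1 ratio means a non-blocking leaf.

def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Return the downlink/uplink bandwidth ratio for one leaf switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical example: 48 x 25 GbE server ports, 6 x 100 GbE uplinks.
print(oversubscription_ratio(48, 25, 6, 100))  # 2.0, i.e. 2:1 oversubscribed
```

In a fully non-blocking Clos fabric the ratio is held at 1:1, while cost-optimized designs commonly tolerate modest oversubscription at the leaf tier.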