Optical Interconnection Solution for Cloud Data Centers

Cloud data centers are the infrastructure of cloud computing. As cloud computing continues to penetrate the market, it drives the construction of hyperscale data centers. Cloud IT infrastructure consists mainly of switches, servers, and the optical cables and optical modules (or AOCs) that interconnect them. Large-scale cloud data centers greatly increase the usage of optical modules while also demanding longer transmission distances, which raises the proportion of single-mode optical modules deployed.


The explosive growth of cloud data center traffic keeps pushing optical module speeds upward, and each upgrade arrives faster than the last: it took 5 years for 10G ports to iterate to 40G, 4 years for 40G ports to upgrade to 100G, and 3 years for 100G ports to reach 400G. All data leaving a data center must first pass through massive internal computation, and applications such as AI generate even larger internal (east-west) and outbound (north-south) traffic. Because data centers now generally adopt flat architectures, the 100G optical module market will continue to grow rapidly.


According to third-party reports from Synergy, the total number of hyperscale data centers worldwide exceeded 300 in 2016 and reached 390 in 2017, with at least 69 more in the planning or construction stage. It is predicted that by the end of 2019 the number will exceed 500, at which point 83% of public cloud servers and 86% of public cloud workloads will run in hyperscale data centers; their share of server deployments will rise from 21% to 47%, of processing power from 39% to 68%, and of traffic from 34% to 53%.


Comparison of Traditional Three-Layer Architecture and Flat Two-Layer Architecture in Data Center


Application examples of optical modules and AOC in data center optical interconnection

The network architecture of a typical cloud data center in China is divided into three tiers: Spine Core, Edge Core, and Leaf (access) Switch.

  • From the server NIC to the access-layer switch, 10G SFP+ AOC active optical cables are used for interconnection.

  • From the access-layer switch to the module (pod) core switch, 40G QSFP+ SR4 optical modules and MPO fiber jumpers are used for interconnection.

  • From the module core switch to the super core switch, 100G QSFP28 CWDM4 optical modules and duplex LC fiber jumpers are used for interconnection.

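The tier-to-link mapping above can be summarized as a small lookup table. This is an illustrative sketch only; the tier names and the helper function are assumptions chosen for clarity, not part of any vendor API.

```python
# Illustrative sketch of the interconnect plan described above.
# Tier names and structure are assumptions, not a real API.
INTERCONNECT_PLAN = {
    # (lower tier, upper tier): (rate, module type, cabling)
    ("server NIC", "leaf switch"): ("10G", "SFP+ AOC", "integrated active optical cable"),
    ("leaf switch", "edge core"): ("40G", "QSFP+ SR4", "MPO multimode jumper"),
    ("edge core", "spine core"): ("100G", "QSFP28 CWDM4", "duplex LC single-mode jumper"),
}

def link_spec(lower: str, upper: str) -> str:
    """Return a human-readable description of the link between two tiers."""
    rate, module, cable = INTERCONNECT_PLAN[(lower, upper)]
    return f"{lower} -> {upper}: {rate} via {module} over {cable}"

for tiers in INTERCONNECT_PLAN:
    print(link_spec(*tiers))
```

Laying the plan out this way makes the pattern visible: link rates rise with each tier, and the cabling shifts from multimode (short reach) at the bottom to single-mode (longer reach, CWDM4) at the top.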

Characteristics of cloud data center optical module requirements

Because cloud data centers differ from carrier-grade networks in traffic growth rate, network architecture, reliability requirements, and machine-room environment, their optical module demand has four characteristics: short iteration cycles, high speed requirements, high density, and low power consumption.

To address these demands, Xinjiexun provides a comprehensive cloud data center optical module solution, covering rates from 10G to 100G and distances from 100 m to 40 km.

  • Short iteration cycle. Data center traffic grows rapidly, driving optical modules to upgrade ever faster. The replacement cycle of data center hardware, including optical modules, is about 3 years, whereas telecom-grade optical modules generally iterate over 6 to 7 years or more.

  • High rate requirements. Because data center traffic grows explosively, optical module technology iteration can barely keep up with demand, and the most cutting-edge technology is essentially always deployed in data centers first. Demand for higher-speed modules has always existed; the key question is whether the technology is mature.

  • High density. The point of high density is to increase the per-board transmission capacity of switches and servers, ultimately to keep pace with high-speed traffic growth. At the same time, higher density means fewer switches need to be deployed, saving machine-room resources.

  • low power consumption. The data center consumes a lot of power. On the one hand, the low power consumption is to save energy consumption, and on the other hand, it is to deal with the heat dissipation problem. Because the backplane of the data center switch is full of optical modules, if the heat dissipation problem cannot be properly solved, it will affect The improvement of the performance and density of the optical module.