Trends in the Data Center Industry

Distributed, decentralized architectures, densification, hybridization and virtualization, and sustainability will continue to drive data center developments in the coming years. Data centers will need to invest to keep up with bandwidth, speed, and latency demands, especially with the rollout of 5G.

Migration to 400G requires higher density and more cabling in racks. Where 40G and 100G parallel optics require eight fibers, newer higher-speed variants can require 16 or 32 fibers, which further boosts cable density.
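
The fiber counts follow directly from the lane structure of parallel optics: each lane occupies one transmit and one receive fiber. The short Python sketch below is a minimal illustration of that arithmetic; the interface list is an assumption based on common IEEE variants, not an exhaustive inventory.

```python
# Fiber counts for parallel-optic interfaces: each lane uses one transmit
# and one receive fiber. The interface names and lane counts below follow
# common IEEE variants; treat the list as illustrative, not exhaustive.

PARALLEL_INTERFACES = {
    "40GBASE-SR4": 4,     # 4 lanes -> 8 fibers
    "100GBASE-SR4": 4,    # 4 lanes -> 8 fibers
    "400GBASE-SR8": 8,    # 8 lanes -> 16 fibers
    "400GBASE-SR16": 16,  # 16 lanes -> 32 fibers
}

for name, lanes in PARALLEL_INTERFACES.items():
    print(f"{name}: {lanes} lanes x 2 fibers = {lanes * 2} fibers")
```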

Changing DC infrastructure – higher density and rack architecture

As more and more connections are required in hyperscale, enterprise, edge, and other types of data center, we expect demand for smaller form-factor products and smart rack solutions to continue for the foreseeable future. High-density patch panels and patch cables with reduced diameters are needed to save rack space, reduce the overall footprint of installations, and reduce the risk of downtime due to improper handling – without compromising on performance. As these can place considerable strain on racks, dedicated HD racks will also be sought after. Different types of racks (edge, telco cabinet, core, spine, network cabinet…) have very different needs, and these needs are diversifying further as connections to the core network/server backbone structure change and new network branching concepts are introduced. Data centers require ever-greater freedom to choose between End of Row (EoR), Middle of Row (MoR), and Top of Rack (ToR) connections in order to optimize performance and design flexibility while realizing significant savings.

In densely packed racks, cables are more difficult to grip and manipulate, and it becomes harder to see what you’re doing. Pre-terminated installation cables and cable systems significantly reduce handling and installation time and guarantee functionality. More and more push-pull connectivity variants are coming onto the market, making connection easier and reducing risk. Demand for innovative fiber connector types will also continue. Very small form factor (VSFF) duplex fiber connectors (e.g. SN, MDC) with 1.25 mm ceramic ferrules are gaining ground. In hyperscale DCs, compact high-fiber-count ribbon cables are increasingly used. These save space and simplify cable management and moves, adds, and changes (MACs), while maximizing port and fiber counts in new or existing pathways, and they offer solutions for backbone and horizontal DC cabling.

Preconfigured cabinets fitted with power, cooling, security, and connectivity that allow infrastructure elements to communicate offer a neat solution for many applications. This also applies to a modular DC approach, using an integrated, predesigned set of modules that form a fully functional, scalable data center from the outset.

Asset management: Vital at every level

Dynamic data center environments increasingly require ongoing, precise, and efficient asset management. As DCs grow more complex, provide more functions, and demand greater flexibility, up-to-date, accurate knowledge of the available infrastructure is a must. For compliance purposes, the ability to demonstrate the lifecycle of DC assets, such as switches or servers, is also essential. For active equipment, monitoring is often well organized; on the passive side, however, it is far more challenging, and a lack of insight often leads to prolonged repair procedures.

Digitalization and integration with network record software are becoming more important.

Once you’ve implemented a system that helps you keep track of automation, workflow management, patching, and so on, you can plan ahead more easily.

(Hyperscale) data centers that accommodate hundreds of thousands of fiber connections in sensitive operating environments can no longer be managed in a traditional way. They have to be monitored automatically to guarantee operational reliability, preferably with support for technical management, compliance, and economic management.

When it comes to MAC issues, human error and poor record-keeping are the main causes. The more you automate, the easier, faster, and more fault-tolerant your work becomes. If you can optimize MAC processes and records, you can improve availability, uptime, and Time to Capability, speed up repairs, and expand in a modular way.

At all times, you will want to know for sure how fast you can switch on new services or functionalities, and that they will work as intended right from the start. Installation managers also need to be absolutely sure each port is connected exactly as they think it is, to prevent security issues. DCIM and inventory management are vital to ensuring this.
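
To make the record-keeping concrete: the sketch below is a minimal, hypothetical model of how a DCIM or inventory tool might document patch connections and verify that a port is connected as recorded. The class and field names are illustrative assumptions, not the data model of any specific product.

```python
# Minimal, illustrative model of passive-infrastructure records for
# MAC tracking; names and fields are assumptions, not a DCIM product API.
from dataclasses import dataclass

@dataclass(frozen=True)
class PatchRecord:
    panel: str        # patch panel identifier
    port: int         # port number on the panel
    far_end: str      # documented far-end device and port

# Documented state of two panel ports.
records = {
    ("PP-A01", 1): PatchRecord("PP-A01", 1, "SW-CORE-1:Eth1/1"),
    ("PP-A01", 2): PatchRecord("PP-A01", 2, "SW-CORE-1:Eth1/2"),
}

def verify(panel: str, port: int, observed_far_end: str) -> bool:
    """Return True if the observed connection matches the documented record."""
    rec = records.get((panel, port))
    return rec is not None and rec.far_end == observed_far_end

# Example: a technician confirms port 1 is patched where the records say,
# while port 2 turns out to be patched somewhere else (a record mismatch).
print(verify("PP-A01", 1, "SW-CORE-1:Eth1/1"))    # True
print(verify("PP-A01", 2, "SW-ACCESS-3:Eth1/5"))  # False
```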

Edge DCs often end up in areas where fewer trained support staff are available. Furthermore, smaller DCs might not have dedicated managers for facilities, IT, or infrastructure. Automated workflow can guide technical staff through processes and make MACs easier.

Migration from 100G to 400-800G

Uptake of 400G and 800G in data centers is being driven by demand from consumers and professional users alike. Working and learning at home as a result of Covid-19 are here to stay and will keep growing, as will HD streaming and gaming, professional media file sharing, fintech, online retail, IoT and IIoT, data analytics, AI, and machine learning. At the same time, data centers need to control costs. This may lead to a reduction or aggregation of switches and to lower cooling and power utilization, especially in cloud DCs.

400G and 800G make this aggregation relatively easy and cost-effective, breaking out 400G switch ports to eight 50G or four 100G servers, or 800G switch ports to eight 100G or four 200G servers. We expect enterprise DCs to migrate to 25G or 50G for servers and 100G or 400G for uplinks. For cloud data centers, server speeds may soon reach 50G or 100G with 200G or 400G uplinks. Today’s MPO connectivity and singlemode/multimode fiber, for example Base8 MPO OM4 and Base8 MPO singlemode, can be used to facilitate migration to 400G and 800G.
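
As a sanity check on the breakout arithmetic above, the following sketch enumerates the options mentioned in the text and confirms that the server lane speeds add up to the switch-port speed. The option list mirrors the examples above; anything beyond that is purely illustrative.

```python
# Illustrative sketch of the breakout options mentioned in the text.
# Each tuple is (switch_port_speed_G, number_of_server_links, link_speed_G).

BREAKOUTS = [
    (400, 8, 50),    # 400G port -> eight 50G servers
    (400, 4, 100),   # 400G port -> four 100G servers
    (800, 8, 100),   # 800G port -> eight 100G servers
    (800, 4, 200),   # 800G port -> four 200G servers
]

for port_speed, links, link_speed in BREAKOUTS:
    total = links * link_speed
    assert total == port_speed, "server link speeds should add up to the port speed"
    print(f"{port_speed}G switch port -> {links} x {link_speed}G server links")
```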

IEEE has released standards for 400G over singlemode and multimode fiber. Transceivers for 400GBASE-DR4, which uses eight singlemode fibers to reach 500m, are available on the market today. The IEEE Beyond 400Gb/s Ethernet Study Group is currently defining physical layer specifications to support 800G, with objectives including support over eight lanes of multimode fiber to 100m and eight lanes of singlemode fiber to 500m. The same group is also looking into physical layer specifications for 1.6 terabit Ethernet.

Rollout speed will be increasingly important

From the end customer’s point of view, DC performance is measured in terms of uptime, spending, efficiency, and Time to Capability. Ten years ago, it might have been acceptable to develop a new business support application over the course of nine months. In today’s cloud-based environment, however, you might have a new service up and running in nine minutes. So a business today expects an application to be ready in a matter of weeks instead of months, which in turn requires ordering servers, storage, load balancers, and network gear well in advance.

Enterprise: Cloud or on-premise?

According to Gartner*, 85% of infrastructure strategies will integrate cloud, on-premise, colocation, and edge delivery options by 2025, up from 20% in 2020. That means it will be vital to determine whether each individual workload should be managed in-house or elsewhere, and which hardware, software, and technical support are required. The main issue is that infrastructure needs to be scalable and agile.

More information: https://www.rdm.com/solutions/data-center/

*Source: Gartner