Wednesday, February 19, 2025

Artificial Intelligence is driving an infrastructure revolution

It’s already clear that the AI revolution will need new network architectures, new networking technologies and a new approach to infrastructure cabling design, emphasising new product innovation and faster installation.

Everyone from the CIO to the data centre manager will need to ensure that their infrastructure can support future AI needs.

AI needs access to more capacity at higher speeds, and those needs will only grow more acute. Whether the AI cloud is on-premise or off-premise, the industry must be ready to meet them.

As recently as 2017, many conversations with cloud data centre operators revolved around data rates (think 100G) that today would be considered “limited.” At that time, the optics supply chains were either immature, or the technology was proving too expensive to go beyond that rate.

At that point, the Internet was already rich in media content: photographs, movies, podcasts, music, and new business applications. However, data storage and transmission capabilities were still relatively limited, at least compared with what we see today.

It’s estimated that in 2017, 1.8 million Snaps were created on Snapchat every minute; by 2023, that figure is reported to have increased by 194,344%, to roughly 3.5 billion Snaps every minute.
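As a quick sanity check, those two figures are consistent with each other. Using only the numbers quoted above, the standard percentage-increase formula gives:

```python
# Sanity check on the reported Snapchat growth figures (numbers as quoted
# in the article; the formula is a standard percentage-increase calculation).
baseline_snaps_per_min = 1.8e6      # reported 2017 figure
reported_increase_pct = 194_344     # reported % increase by 2023

grown = baseline_snaps_per_min * (1 + reported_increase_pct / 100)
print(f"{grown:,.0f} Snaps per minute")  # ≈ 3.5 billion
```

The result lands within a rounding error of the 3.5 billion figure, so the two reported numbers line up.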

We also now see IT technology that can interrogate all the 1s and 0s used to make those images and sounds and, in the blink of an eye, answer a complex query, make actionable decisions, detect fraud, or even interpret patterns that may necessitate future social and economic change at a national level. These previously human responsibilities can now be achieved instantly using AI.

Both on-prem and off-prem AI cloud infrastructure must expand to support the vast amounts of data these AI workloads generate.

CommScope has been working to provide infrastructure solutions in iterative and generative AI (GenAI) for years, supporting many global players in the cloud and internet industry.     

For some time, we’ve taken an approach to infrastructure that sets its sights firmly beyond the short term. We build solutions not only for the challenges we can see coming but also for those our customers don’t yet anticipate.

A good example of this thinking is connectivity. We thought long and hard about how the networking industry would respond to the demand for higher data rates, and how the electrical paths and silicon inside the next generation of switches would likely shape the future of optical connectivity. One outcome of those conversations was the MPO16 optical fibre connector, which CommScope was among the first to bring to market in an end-to-end structured cabling solution.

This connector ensures that the current IEEE roadmap of higher data rates can be satisfied, including at 400G, 800G and 1.6T, all essential technologies for the AI cloud.
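For a rough sense of how per-lane signalling rates aggregate into those headline speeds, the arithmetic below is illustrative; the lane counts and per-lane rates are common optical configurations chosen for illustration, not a description of any specific CommScope product:

```python
# Illustrative only: headline Ethernet speeds as (lane count, Gb/s per lane).
# These pairings are typical configurations, used here to show the arithmetic.
configs = {
    "400G": (8, 50),    # 8 lanes x 50 Gb/s
    "800G": (8, 100),   # 8 lanes x 100 Gb/s
    "1.6T": (8, 200),   # 8 lanes x 200 Gb/s
}
for name, (lanes, gbps) in configs.items():
    print(f"{name}: {lanes} x {gbps}G = {lanes * gbps}G")
```

The point of the exercise: each generation multiplies the per-lane rate rather than the fibre count, which is why a 16-fibre connector footprint can carry the roadmap forward.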

We’ve also developed solutions that are quick to install, an advantage as highly prized as the connector technology itself. Pulling high-fibre-count, factory-terminated cable assemblies through a conduit can significantly reduce build time for AI cloud deployments while ensuring factory-level optical performance over multiple channels. CommScope supplies the industry with assemblies that provide 1,728 fibres, all pre-terminated onto MPO connectors in our controlled factory environment, so AI cloud providers can quickly connect multiple front-end and back-end switches and servers.
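To put that fibre count in perspective, a quick back-of-the-envelope calculation shows how a 1,728-fibre trunk breaks out onto MPO connectors; the 12-, 16- and 24-fibre variants used below are common MPO sizes, chosen here purely for illustration:

```python
# Back-of-the-envelope: MPO connectors needed to terminate a 1,728-fibre
# trunk, for several common MPO fibre counts (illustrative assumptions).
total_fibres = 1728
for fibres_per_mpo in (12, 16, 24):
    print(f"MPO{fibres_per_mpo}: {total_fibres // fibres_per_mpo} connectors per end")
```

Conveniently, 1,728 divides evenly by all three common connector sizes, which is one reason such high-count trunks suit pre-terminated factory assembly.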

To that point, we see an AI cloud arms race, not just among the big players but also among companies that might have been labelled “tier 2” or “tier 3” cloud providers just a short while ago. These companies measure their success by how rapidly they can build and spin up AI cloud infrastructure to provide GPU access to their customers and, just as importantly, by beating competitors off the starting line.

The (quickly approaching) future

In the new world of the AI cloud, all data must be read and re-read; it’s not just the latest batch of new data to land at the server that must be prioritised. To achieve payback on a trained model, all data (old and new) must be kept in a constant state of high accessibility so that it can be quickly served up for training and retraining.  

This means that GPU servers require nearly instantaneous direct access to all the other GPU-enabled servers on the network to work efficiently. The old “build now and think about extending later” approach to network design won’t work in the world of AI cloud. Today’s architectures must be built with the future in mind, i.e., the parallel processing of vast amounts of often diverse data. Designing the network around GPU servers’ access demands first will ensure the best payback on the sunk CapEx and the ongoing OpEx required to power these devices.

In a remarkably short time, AI took the cloud data centre from the “propeller era” and rocketed it into a new hypersonic jet age. I think we’re going to need a different aeroplane.

CommScope can help you better understand and navigate the AI landscape. Start by downloading our new guide, Data Center Cabling Solutions for NVIDIA AI Networks.
