The demand for digital infrastructure is soaring, but the pace of technological change is leaving many data centres struggling to keep up. Facilities built just a few years ago are already at risk of becoming outdated as artificial intelligence, high-density computing, and new cooling methods push traditional designs to their limits. To remain viable, operators must rethink how they approach investment, engineering, and long-term planning.
The Problem with Building for Yesterday
Most new data centre projects still follow design frameworks inherited from earlier industry standards. While these practices worked well for conventional server environments, they are ill-suited to the unpredictable demands of AI and other advanced workloads. Traditional assumptions—such as stable IT loads, consistent airflow patterns, and predictable server densities—are being overturned by today’s computing landscape.
AI platforms, in particular, are transforming requirements. Training large models demands far higher density, more power, and specialized cooling. Yet when workloads shift to inference mode, demand drops sharply. This creates dynamic conditions that conventional designs were never built to handle. Without flexibility, operators risk frequent and costly retrofits to keep facilities functional.
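As a rough illustration of that swing, the short Python sketch below compares a legacy 15kW-per-rack design point with assumed training and inference loads over a hypothetical daily schedule. All of the figures are placeholders chosen to show the mismatch, not measurements from any particular facility.

```python
# Minimal sketch: compare a fixed design load against a swinging AI load profile.
# The power figures and schedule below are illustrative assumptions, not
# measurements from any specific facility or hardware generation.

DESIGN_KW_PER_RACK = 15       # legacy air-cooled design assumption
TRAINING_KW_PER_RACK = 80     # assumed dense AI training load
INFERENCE_KW_PER_RACK = 25    # assumed lighter inference load

# Hypothetical 24-hour split between training and inference duty.
schedule = [("training", 10), ("inference", 14)]

for phase, hours in schedule:
    kw = TRAINING_KW_PER_RACK if phase == "training" else INFERENCE_KW_PER_RACK
    overload = kw / DESIGN_KW_PER_RACK
    print(f"{phase:9s}: {kw:3d} kW/rack for {hours:2d} h "
          f"({overload:.1f}x the legacy design point)")
```

Even in this toy schedule, the same racks sit far above the legacy design point for part of the day and close to it for the rest, which is exactly the variability a fixed design cannot absorb.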
Why Conventional Cooling Falls Short
For decades, the 19-inch cabinet with front-to-back airflow and hot-aisle containment was a reliable standard. Air cooling alone could handle loads of up to about 15kW per rack, which was more than sufficient for most applications. But as AI accelerators push beyond 30kW, and in some cases toward 100kW per cabinet, simply pushing more air through the room is no longer practical. Widening aisles and adding fans only eats into usable capacity and energy efficiency without solving the underlying problem.
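A quick back-of-the-envelope calculation shows why: the heat an airstream can carry follows Q = ρ·cp·V·ΔT, so the required airflow grows linearly with rack load. The sketch below uses assumed round numbers for air properties and a 12 K temperature rise across the rack; they are illustrative values, not design specifications.

```python
# Back-of-the-envelope sketch: airflow needed to remove rack heat with air alone,
# from Q = rho * cp * V * dT. Air properties and the 12 K temperature rise are
# assumed round numbers for illustration, not a design specification.

RHO_AIR = 1.2      # kg/m^3, approximate density of air
CP_AIR = 1005.0    # J/(kg*K), specific heat of air
DELTA_T = 12.0     # K, assumed air temperature rise across the rack

def airflow_m3s(load_kw: float) -> float:
    """Volumetric airflow (m^3/s) required to absorb load_kw of heat."""
    return load_kw * 1000.0 / (RHO_AIR * CP_AIR * DELTA_T)

for load in (15, 30, 100):  # kW per rack
    v = airflow_m3s(load)
    print(f"{load:3d} kW rack -> {v:5.2f} m^3/s (~{v * 2119:,.0f} CFM)")
```

Under these assumptions, a 100kW cabinet would need several times the airflow of the 15kW racks the room was sized for, long before fan power and acoustics are considered.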
Liquid cooling, once considered a niche option, is quickly becoming essential. Direct-to-chip solutions promise to handle far greater heat loads, but the industry has yet to standardize around a single approach. Some manufacturers, like NVIDIA with its liquid-cooled GB200 systems, are leading the way, but broader adoption will take time. Until then, many operators remain cautious, reluctant to commit to designs that may not be compatible with future hardware.
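The same heat balance applied to water shows why direct-to-chip cooling scales so much further: water's volumetric heat capacity is roughly 3,500 times that of air, so the flow needed to carry a given load is modest. The companion sketch below uses assumed coolant properties and an assumed 10 K loop temperature rise, not figures from any particular vendor's system.

```python
# Companion sketch: water flow needed to carry rack heat via direct-to-chip
# liquid cooling, again from Q = m_dot * cp * dT. Coolant properties and the
# 10 K supply-to-return temperature rise are assumed values for illustration.

CP_WATER = 4181.0   # J/(kg*K), specific heat of water
RHO_WATER = 997.0   # kg/m^3, density of water
DELTA_T = 10.0      # K, assumed supply-to-return temperature rise

def coolant_lpm(load_kw: float) -> float:
    """Litres per minute of water needed to absorb load_kw of heat."""
    mass_flow = load_kw * 1000.0 / (CP_WATER * DELTA_T)   # kg/s
    return mass_flow / RHO_WATER * 1000.0 * 60.0          # L/min

for load in (30, 100):  # kW per rack
    print(f"{load:3d} kW rack -> ~{coolant_lpm(load):5.1f} L/min of water")
```

A 100kW cabinet comes out at well under 200 litres per minute in this simplified model, which is why the engineering debate now centres on standardising connectors, manifolds, and coolant loops rather than on whether liquid can carry the heat.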
Designing for Tomorrow’s Workloads
Building data centres that can adapt to change requires more than incremental upgrades—it calls for a mindset shift. Instead of relying on fixed standards, operators need to design with flexibility in mind, allowing for scalable containment, modular power distribution, and hybrid cooling solutions. This ensures that facilities can accommodate everything from conventional servers to next-generation AI processors without constant re-engineering.
Advanced modelling plays a critical role in this process. By simulating different hardware stacks, densities, and cooling strategies before construction begins, engineers can anticipate challenges and optimize layouts. This approach, long used in industries like energy and aerospace, reduces costly redesigns and ensures that new facilities can evolve as technologies advance.
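As a toy version of that kind of analysis, the sketch below sweeps a handful of candidate rack densities against assumed per-rack limits for different cooling strategies. The limits and strategy names are placeholders for illustration; a real pre-construction model would add airflow, hydraulics, power distribution, and cost.

```python
# Minimal sketch of a parametric pre-construction sweep: check candidate cooling
# strategies against a range of rack densities. The capacity limits below are
# illustrative assumptions, not vendor or standards figures.

from itertools import product

# Assumed practical per-rack limits for each cooling strategy (kW).
cooling_limits = {
    "air": 15,
    "rear-door heat exchanger": 40,
    "direct-to-chip": 120,
}

# Candidate rack densities to evaluate (kW per rack).
densities = [10, 30, 60, 100]

for density, (strategy, limit) in product(densities, cooling_limits.items()):
    verdict = "ok" if density <= limit else "exceeds limit"
    print(f"{density:3d} kW/rack with {strategy:25s}: {verdict}")
```

Even a simple sweep like this makes the trade-offs explicit early, when changing the design is still cheap.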
Balancing Urgency with Longevity
The pressure to deliver new capacity quickly often leads to short-term decision-making. Yet the greater risk lies in building facilities that are outdated the moment they open. By prioritizing adaptability, operators can extend the lifespan of their investments, reduce carbon impact, and avoid disruptive retrofits.
The future of data centres depends on more than speed to market—it requires foresight. Those who invest in flexible, scalable, and resilient designs today will be the ones best prepared to meet the demands of tomorrow’s digital world.