
Modernizing Legacy Data Centres for the AI Revolution

by admin
09/16/2025
in Digital Transformation

As artificial intelligence (AI) workloads become more demanding, legacy data centres, once built for far lighter computing loads, are quickly showing their age. Designed for smaller workloads and simpler computing tasks, these facilities face significant challenges when asked to handle the energy-hungry, high-performance needs of AI, and they require substantial updates to support the intensive requirements of modern technology.

The Challenge of AI Workloads on Legacy Systems

In the past, data centres were designed to handle racks of servers running at relatively low power (around 5-10 kW). Today, AI-driven applications demand far more from infrastructure, with some racks now pushing hundreds of kilowatts. This shift makes the traditional designs of legacy data centres obsolete. To stay relevant, these facilities must undergo significant retrofitting to accommodate the next wave of high-powered workloads.
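
To put that shift in perspective, the back-of-envelope sketch below (in Python, with illustrative figures drawn from the ranges above rather than from any specific facility) shows how quickly the heat load a cooling plant must remove grows when racks move from single-digit to triple-digit kilowatts:

# Back-of-envelope comparison of legacy vs. AI rack power density.
# Figures are illustrative, based on the ranges discussed above.
LEGACY_RACK_KW = 8      # typical legacy rack (5-10 kW range)
AI_RACK_KW = 100        # dense AI/GPU rack (can reach several hundred kW)
RACKS_PER_ROW = 10

legacy_row_kw = LEGACY_RACK_KW * RACKS_PER_ROW
ai_row_kw = AI_RACK_KW * RACKS_PER_ROW

# Essentially all electrical power drawn by IT equipment ends up as heat,
# so the cooling plant must remove the same number of kilowatts.
print(f"Legacy row: {legacy_row_kw} kW of heat to remove")
print(f"AI row:     {ai_row_kw} kW of heat to remove "
      f"({ai_row_kw / legacy_row_kw:.1f}x the legacy load)")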

Cooling Upgrades: From Air to Liquid

One of the most notable areas where legacy data centres fall short is cooling. Traditional air-based cooling systems were sufficient for older, less power-hungry servers. However, with the introduction of advanced AI servers and GPUs, these systems are increasingly ineffective and inefficient. Liquid cooling solutions, such as direct-to-chip cooling, have become the new standard. While overhauling cooling systems entirely can be expensive, introducing liquid cooling incrementally can significantly improve efficiency and reduce energy consumption.
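
As a rough illustration of what a direct-to-chip loop has to deliver, the sketch below sizes the coolant flow for a single high-density rack using the standard heat-balance relation Q = m_dot * c_p * delta_T; the 80 kW load and 10 °C temperature rise are assumed values for illustration, not figures from any particular deployment:

# Rough sizing of a direct-to-chip liquid cooling loop for one rack.
# Uses the heat-balance relation Q = m_dot * c_p * delta_T.
RACK_HEAT_LOAD_W = 80_000   # 80 kW rack (assumed)
COOLANT_CP = 4186           # specific heat of water, J/(kg*K)
DELTA_T = 10                # allowed coolant temperature rise, K (assumed)

mass_flow_kg_s = RACK_HEAT_LOAD_W / (COOLANT_CP * DELTA_T)
# Water is roughly 1 kg per litre, so kg/s translates directly to L/s.
flow_l_min = mass_flow_kg_s * 60

print(f"Required coolant flow: {mass_flow_kg_s:.2f} kg/s "
      f"(~{flow_l_min:.0f} L/min) per rack")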

Retrofitting for the AI Age

Rather than completely replacing legacy infrastructure—which would be prohibitively expensive—many data centre operators are choosing to retrofit their existing facilities. By upgrading rather than rebuilding, operators can extend the life of older plants and make them capable of supporting AI workloads. This approach offers a balance between cost and performance, avoiding the huge capital expenses that come with starting from scratch.

Retrofitting often involves reinforcing the physical infrastructure of a building. Older data centres may not have been built to support the weight of the heavy AI racks now required. To make these upgrades feasible, floors can be reinforced, and cabinets expanded to accommodate newer, larger servers. These adjustments allow older facilities to handle the advanced, energy-intensive equipment needed for AI computing.

Addressing New Technology Demands

According to Jordi Sinfreu, Head of Data Centres for Southern Europe at JLL, higher-density racks demand not only stronger physical structures but also different cooling solutions. As AI, edge computing, and hyperscale systems require more power and generate more heat, traditional air cooling systems can no longer keep up. Liquid cooling methods are better equipped to handle these needs, and data centre designs must evolve to integrate these newer technologies effectively.

Training staff is also crucial in this transition. Many legacy data centres were never designed with AI in mind, and operators may need additional education on how to manage and maintain upgraded systems. Legacy monitoring systems, for instance, may lack the sophistication required to track GPU utilization, latency, and other key performance indicators essential for AI operations. Updated systems with greater visibility can help identify potential bottlenecks and inefficiencies, making it easier to optimize performance.
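
As a minimal sketch of the kind of visibility an upgraded monitoring setup provides, the snippet below polls per-GPU utilization, memory, and temperature. It assumes NVIDIA GPUs with the nvidia-smi command-line tool installed; a production system would more likely export these metrics to a dedicated monitoring pipeline rather than print them:

# Minimal sketch: poll GPU telemetry via nvidia-smi (assumes NVIDIA GPUs).
import subprocess
import time

QUERY = "utilization.gpu,memory.used,memory.total,temperature.gpu"

def sample_gpus():
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    for idx, line in enumerate(out.strip().splitlines()):
        util, mem_used, mem_total, temp = [v.strip() for v in line.split(",")]
        print(f"GPU {idx}: {util}% util, {mem_used}/{mem_total} MiB, {temp} C")

while True:
    sample_gpus()
    time.sleep(30)   # poll every 30 seconds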

Environmental Benefits of Retrofitting

While retrofitting presents technical challenges, it also offers significant environmental advantages. Instead of demolishing and rebuilding entire data centres—an extremely carbon-intensive process—retrofitting can reduce waste and energy consumption. With data centre energy use and carbon emissions coming under increasing regulatory scrutiny, updating existing facilities rather than constructing new ones can help companies reduce their overall carbon footprint.

Retrofitting also opens up opportunities to adopt newer, more sustainable technologies. For example, immersion cooling systems, which use liquid to cool servers rather than air, can dramatically improve energy efficiency. These systems are a prime example of how retrofitting can make legacy data centres more energy-efficient while reducing operational costs in the long run.
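
One common way to quantify such gains is Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. The sketch below compares an air-cooled facility with an immersion-cooled retrofit at the same IT load; both PUE values are illustrative assumptions rather than measured figures:

# Illustration of Power Usage Effectiveness (PUE):
#   PUE = total facility energy / IT equipment energy
# The PUE values below are illustrative assumptions, not measurements.
IT_LOAD_KW = 1_000      # IT equipment load, assumed constant
PUE_AIR = 1.6           # legacy air-cooled facility (assumed)
PUE_IMMERSION = 1.1     # retrofitted immersion cooling (assumed)
HOURS_PER_YEAR = 8760

for label, pue in [("air-cooled", PUE_AIR), ("immersion", PUE_IMMERSION)]:
    total_mwh = IT_LOAD_KW * pue * HOURS_PER_YEAR / 1000
    print(f"{label:>10}: PUE {pue} -> {total_mwh:,.0f} MWh/year total draw")

saved = IT_LOAD_KW * (PUE_AIR - PUE_IMMERSION) * HOURS_PER_YEAR / 1000
print(f"Estimated saving: {saved:,.0f} MWh/year for the same IT load")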

Preparing for the Future of AI

While fully transforming legacy data centres may not always be feasible, targeted retrofitting can make a significant difference. By focusing on the areas that will have the greatest impact—such as improving cooling systems, modernizing physical storage, and upgrading energy management systems—older data centres can be made fit for the AI-driven future. These retrofits not only help companies stay competitive but also contribute to sustainability goals by reducing the carbon footprint of existing infrastructure.

In the end, retrofitting legacy data centres is about more than just keeping up with AI demands. It’s about adapting older systems to meet the needs of a rapidly evolving technological landscape while reducing environmental impact. For many organizations, this strategy offers the best way forward.
