The rapid growth of artificial intelligence (AI) workloads is driving a transformation in data centre design, requiring specialized infrastructure that poses new challenges for supply chain management, procurement strategies, and construction economics.
AI-capable data centres are fundamentally different from traditional facilities, demanding high-performance computing (HPC) hardware, greater power density, and advanced thermal management systems.
These requirements push the limits of conventional design and demand new approaches to power distribution, cooling, and structural flexibility. They also carry inherent impacts and risks for program, cost, and management processes that need to be considered.
Some of the differences and considerations are outlined below:
The AI boom is putting additional strain on long lead equipment (LLE) supply chains as data centre construction expands at an unprecedented pace. While most attention is on the availability and supply of chips and servers, the surge in AI workloads also drives heavy demand for standard infrastructure components.
From a supply chain standpoint, there is increased pressure on sourcing high-demand components such as GPUs, chillers, switchgear, custom AI accelerators, and high-bandwidth networking gear. These specialized components, already classified as LLE due to extended manufacturing and delivery timelines, often have limited supplier pools and carry higher procurement risk driven by global competition and production constraints; they now face even longer lead times as suppliers struggle to keep up. The supply chain must also adapt to emerging needs such as liquid cooling infrastructure, heavy-duty power distribution gear, and next-generation storage solutions, many of which are yet to be commoditized. This creates bottlenecks that can delay data centre projects, increase costs, and intensify competition for critical equipment across the industry.
From a procurement perspective, the shift to AI workloads alters the vendor landscape and cost structure. Buyers must increasingly work with specialized suppliers and negotiate for cutting-edge, often non-standard equipment. This raises unit costs and introduces greater exposure to technology refresh cycles and interoperability risks.
From a construction cost perspective, capital expenditure (CAPEX) is rising due to the need for robust electrical and mechanical systems that support higher rack densities (often exceeding 40–60 kW per rack), liquid cooling systems, and flexible, modular design frameworks. These requirements increase upfront costs across power, cooling, and structural components.
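To give a sense of the scale of that electrical and mechanical uplift, the short sketch below works through the power arithmetic for a single data hall. All rack counts, densities, and PUE values are hypothetical assumptions chosen purely to illustrate the calculation, not project benchmarks.

```python
# Hypothetical illustration: facility power implied by rack density (all figures assumed).

RACKS_PER_HALL = 200            # assumed rack count for one data hall
TRADITIONAL_KW_PER_RACK = 8     # assumed air-cooled enterprise density
AI_KW_PER_RACK = 50             # assumed AI density (mid-point of a 40-60 kW/rack range)
PUE_TRADITIONAL = 1.5           # assumed power usage effectiveness with air cooling
PUE_AI = 1.2                    # assumed PUE with liquid cooling

def facility_load_mw(racks: int, kw_per_rack: float, pue: float) -> float:
    """Total facility power (MW) = IT load scaled by PUE."""
    return racks * kw_per_rack * pue / 1000

traditional_mw = facility_load_mw(RACKS_PER_HALL, TRADITIONAL_KW_PER_RACK, PUE_TRADITIONAL)
ai_mw = facility_load_mw(RACKS_PER_HALL, AI_KW_PER_RACK, PUE_AI)

print(f"Traditional hall: {traditional_mw:.1f} MW facility load")
print(f"AI-capable hall:  {ai_mw:.1f} MW facility load")
print(f"Uplift factor:    {ai_mw / traditional_mw:.1f}x")
```

Under these assumed figures, the same hall footprint moves from roughly 2.4 MW to 12 MW of facility load, which is what drives the heavier switchgear, distribution, and cooling plant reflected in the higher CAPEX.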
However, this increase in CAPEX is balanced by AI-optimised operations that reduce costs such as cooling, which can account for 30-40% of a data centre's total energy consumption, and by greater cost efficiency in terms of dollars per megawatt ($/MW) of usable compute power. AI-capable data centres achieve higher utilization rates and better performance-per-watt by packing more processing power into the same footprint. For equivalent workloads, GPU-ready data centres can require as little as 1/20th the power of a CPU-only data centre and, with denser servers, can be housed within 1/40th of the building footprint, delivering significant Total Cost of Ownership (TCO) savings for a large multi-node system.
This translates into improved economic density, making the investment more cost-effective over time, especially for organizations running large-scale AI training or inference workloads.
In essence, while AI infrastructure demands a more capital-intensive build, it delivers greater long-term efficiency and return on investment, shifting the value equation for data centre planners and procurement teams alike.
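As a rough way to see how that value equation shifts, the sketch below compares capital cost per megawatt of usable compute and a simple undiscounted multi-year cost view for two builds. Every figure is a hypothetical assumption chosen only to show the arithmetic; real capex, opex, and capacity numbers will vary by project.

```python
# Hypothetical illustration of $/MW economic density and simple TCO (all figures assumed, in $M).

def cost_per_mw(capex_musd: float, usable_compute_mw: float) -> float:
    """Economic density: capital cost per MW of usable compute power."""
    return capex_musd / usable_compute_mw

def simple_tco(capex_musd: float, annual_opex_musd: float, years: int) -> float:
    """Undiscounted total cost of ownership over the planning horizon."""
    return capex_musd + annual_opex_musd * years

# Assumed figures for a like-for-like compute capability (illustrative only)
traditional = {"capex": 120.0, "usable_mw": 10.0, "annual_opex": 18.0}
ai_capable  = {"capex": 180.0, "usable_mw": 24.0, "annual_opex": 20.0}

for name, dc in (("Traditional", traditional), ("AI-capable", ai_capable)):
    print(
        f"{name:<12} "
        f"$/MW: {cost_per_mw(dc['capex'], dc['usable_mw']):.1f}  "
        f"10-yr TCO: {simple_tco(dc['capex'], dc['annual_opex'], 10):.0f}  "
        f"10-yr TCO per MW: {simple_tco(dc['capex'], dc['annual_opex'], 10) / dc['usable_mw']:.1f}"
    )
```

In this illustrative case the AI-capable build costs more upfront, but because it packs more usable compute into the same investment, both its $/MW and its ten-year cost per megawatt come out lower, which is the economic density argument in numerical form.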
While AI workloads demand specialized, high-performance infrastructure, many enterprise applications, such as web hosting, databases, transaction processing, and legacy systems, are better suited to general-purpose data centres.
Traditional data centres will continue to play a role even as AI-capable data centres become more prominent.
These traditional facilities are often more cost-effective for predictable, non-AI workloads and remain critical across industries like finance, healthcare, and government.
Additionally, the high capital and operational costs of AI infrastructure make it impractical for many organizations to fully transition.
A hybrid model is emerging in APAC, acknowledging that both approaches have a role and serve a purpose. AI workloads are processed in specialized facilities while conventional workloads continue to run in traditional environments; this hybrid approach balances performance, cost, and flexibility, ensuring both types of data centres coexist to support a wide range of business needs. In this hybrid model, AI-specific infrastructure, characterized by high-density racks, liquid cooling, and advanced power systems, is deployed in dedicated zones or facilities optimized for performance-intensive tasks such as model training and inference. Meanwhile, general-purpose workloads like web hosting, databases, and legacy applications continue to operate within a traditional data centre environment, offering cost-effective scalability and lower operational complexity.
This model allows organizations to allocate resources more efficiently, aligning infrastructure investment with workload requirements. Shared services such as networking, security, and monitoring can span both environments, creating operational synergies while maintaining workload-specific optimization. From a cost and program management perspective, hybrid models introduce greater flexibility in phasing, risk mitigation, and budget allocation, enabling developers to balance capital intensity with long-term viability. Importantly, this approach also helps navigate power constraints by distributing load across facilities with varying grid readiness, and supports sustainability goals by tailoring energy strategies to workload profiles.
Power and grid constraints
The ability of both AI-specific and traditional models to co-exist in the market has further implications when the hidden bottlenecks of power, utility grid, and supply infrastructure readiness are factored in. As power and utility infrastructure plays catch-up, particularly for power-intensive operations in AI-specific data centres, the cost viability of AI developments versus traditional data centres becomes a real factor for developers and operators to consider. A recent article published by Garvan Barry, Regional Director, North Asia, addresses these challenges in further detail.
Sustainability - specific for AI
Sustainability is no longer optional. AI-specific facilities must address environmental impacts across energy, water, and materials. Access to renewable power, efficient cooling systems, and sustainable building design, including biophilic elements and climate-responsive architecture, are increasingly influencing investment decisions. These considerations are not only regulatory or reputational; they are economic, affecting long-term operating costs and asset value.
Design - specific for AI
Modularisation is emerging as a key strategy to accelerate delivery and manage risk. Prefabricated components and scalable design frameworks enable faster construction timelines - often within 12 months - and allow for phased deployment aligned to evolving AI workloads. This approach also supports flexibility in design, helping operators adapt to changing technologies and tenant requirements.
AI is reshaping the economics and delivery models of data centres. Developers must now plan for both performance and viability - balancing capital intensity with long-term efficiency, and ensuring infrastructure is grid-aware and future-ready. As demand for AI infrastructure accelerates, success will depend on the ability to deliver high-performance, cost-effective, and sustainable data centres at scale. The shift is not just technical - it is strategic, requiring new thinking across design, procurement, and program management.
Linesight is a global leader in providing strategic guidance and support for the delivery of world-class data centre facilities tailored to meet the evolving needs of the digital economy. Partnering closely with multinational owner-operated and global wholesale data centre providers, Linesight offers strategic project, supply chain and cost management services, specifically designed to address complex operational challenges, guiding clients every step of the way in delivering efficient, cost-effective, and sustainable data centres. For more information about our data centre capabilities, please click here.