If it feels like the semiconductor market is suddenly back in the headlines, that’s because it is. ASML, the world’s leading supplier of photolithography systems, has seen its shares rise around 97 per cent over the past six months, reflecting renewed investment in chip production. Yet behind the headlines there’s a quieter, arguably just as critical story: managing the heat generated by both chip production and the AI hardware that depends on it, explains Ben Kitson…
Today’s cycle is unusual. Hyperscalers are pouring investment into AI data centres, driving unprecedented demand for high-performance hardware. What’s more, much of this compute capacity has already been committed, as reported by Yahoo Finance.
This combination is creating a perfect storm for infrastructure planning, with AI operators facing high power densities and unprecedented cooling requirements in their data centres.
Traditional data centres were designed around racks drawing roughly 5–10 kW; AI clusters now operate at 30–50 kW per rack. What’s more, advanced GPU and accelerator platforms are already reaching 100–120 kW per rack, meaning air cooling alone is no longer sufficient.
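The gap between air and liquid cooling at these densities can be seen in a back-of-envelope calculation. The sketch below applies the basic heat-balance relation Q = ṁ·cp·ΔT to a hypothetical 100 kW rack; the fluid properties and the 10 K coolant temperature rise are illustrative assumptions, not figures from any particular design.

```python
# Illustrative only: coolant flow needed to carry away rack heat.
# From Q = m_dot * cp * dT, the required mass flow is m_dot = Q / (cp * dT).
# Real designs also account for pressure drop, channel geometry and
# approach temperatures.

def mass_flow_kg_s(heat_w: float, cp_j_kg_k: float, delta_t_k: float) -> float:
    """Mass flow rate needed to absorb heat_w with a temperature rise of delta_t_k."""
    return heat_w / (cp_j_kg_k * delta_t_k)

RACK_POWER_W = 100_000   # a 100 kW AI rack (upper end of the range quoted above)
DELTA_T_K = 10.0         # assumed coolant temperature rise

# Air: cp ~ 1005 J/(kg.K), density ~ 1.2 kg/m^3
air_kg_s = mass_flow_kg_s(RACK_POWER_W, 1005.0, DELTA_T_K)
air_m3_s = air_kg_s / 1.2

# Water: cp ~ 4186 J/(kg.K), density ~ 1000 kg/m^3 (~1 kg per litre)
water_kg_s = mass_flow_kg_s(RACK_POWER_W, 4186.0, DELTA_T_K)
water_l_s = water_kg_s

print(f"Air:   {air_m3_s:.1f} m^3/s per rack")   # roughly 8 m^3/s
print(f"Water: {water_l_s:.1f} L/s per rack")    # roughly 2.4 L/s
```

Under these assumptions, a single 100 kW rack would need on the order of 8 m³ of air every second versus a couple of litres of water, which is why high-density deployments are moving to liquid cooling.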
Thermal management in focus
Thermal constraints are finally hitting the headlines. In May 2025, chip giant Nvidia said hyperscale operators were installing tens of thousands of its latest GPUs every week, a deployment rate expected to accelerate with the rollout of its “Blackwell Ultra” platform.
The company’s public roadmap indicates that its next “Rubin Ultra” architecture could place more than 500 GPUs in a single rack drawing up to 600 kW, highlighting the scale of the cooling challenge now facing AI infrastructure.
Across AI infrastructure, thermal stability has become a defining constraint not only in chip design, but also in the infrastructure required to power and cool high-density compute environments.
High-performance liquid cooling and micro-channel heat exchangers have shifted from niche solutions to essential components. The same engineering principles (precise fluid-flow control, maximised heat transfer and compact components with tight tolerances) now apply across multiple applications.
Engineering expertise developed in high-precision semiconductor environments is now being applied to printed circuit heat exchanger (PCHE) technology for AI data centres, a convergence between electronics manufacturing and energy infrastructure.
Why PCHEs matter
PCHEs aren’t just a fancier version of conventional designs like shell-and-tube or plate-and-frame. They’re smaller, lighter and more efficient, making them ideal where space is tight and density is high.
In data centres, this translates to more racks per square metre without compromising reliability, while reducing the energy needed to keep compute hardware cool.
Energy efficiency is another factor, with AI workloads expected to drive a significant jump in global electricity demand. Goldman Sachs projects up to a 165 per cent increase by 2030, meaning that every watt of cooling matters.
Compact, high-performing PCHEs not only save floor space, but they also help manage energy costs and improve overall power usage effectiveness, making them a critical component in high-density, hyperscale AI infrastructure.
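Power usage effectiveness (PUE) is simply total facility power divided by IT equipment power, so any reduction in cooling overhead shows up directly in the ratio. The numbers in the sketch below are hypothetical, chosen only to show the arithmetic of how more efficient cooling translates into megawatts saved.

```python
# Hedged illustration of power usage effectiveness (PUE):
#   PUE = total facility power / IT equipment power
# All figures are hypothetical examples, not measurements from any real site.

def pue(it_power_mw: float, cooling_mw: float, other_mw: float) -> float:
    """Ratio of total facility power to IT power (1.0 would mean zero overhead)."""
    return (it_power_mw + cooling_mw + other_mw) / it_power_mw

IT_LOAD_MW = 10.0   # assumed IT load
OTHER_MW = 1.0      # assumed power distribution losses, lighting, etc.

baseline = pue(IT_LOAD_MW, cooling_mw=4.0, other_mw=OTHER_MW)   # -> 1.5
improved = pue(IT_LOAD_MW, cooling_mw=2.0, other_mw=OTHER_MW)   # -> 1.3

saving_mw = (baseline - improved) * IT_LOAD_MW
print(f"PUE {baseline:.2f} -> {improved:.2f}, saving {saving_mw:.1f} MW")
```

In this hypothetical case, halving the cooling overhead takes PUE from 1.5 to 1.3, freeing 2 MW at a 10 MW IT load. At hyperscale, that margin is why compact, efficient heat exchangers matter.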
Scaling chemical etching
The very features that make PCHEs effective (micro-channels, high-surface-area designs and tight tolerances) also make them difficult to produce. Conventional machining can create prototypes, but it is slow, introduces burrs and is commercially unviable at scale.
Chemical etching, on the other hand, overcomes these challenges by forming all channels simultaneously across the plate. This produces stress-free, precise features, with diffusion bonding then joining the etched plates into the final heat exchanger core.
Chemical etching specialist Precision Micro has produced PCHE plates since the technology’s early commercialisation in the 1990s, operating a dedicated 44,000 ft² facility capable of processing thousands of plates each week, up to 1.5 metres long and 2 mm thick. This supports volume manufacture of etched plates and makes it one of the largest sheet metal etching operations of its kind.
Such capacity matters because scaling production to thousands of plates requires tightly controlled chemistry and rigorous quality assurance. Very few suppliers globally have the expertise, capacity and process control required to manufacture etched PCHE plates at volume.
Supply chain pressures
Producing high‑volume PCHE plates is capital and process intensive. While new production capacity is emerging in Asian markets, many European and North American OEMs continue to emphasise reliability, process consistency and quality as key criteria when sourcing precision components.
Working with established regional partners can reduce logistical complexity, improve intellectual property protection and ensure consistency, especially as supply chains need to localise critical capabilities.
These etched flow plates and high‑performance heat exchangers form a crucial, if often invisible, part of the AI ecosystem. By enabling precise temperature control, they help data centres maintain high-density compute racks without overheating and ensure that AI infrastructure can scale reliably and efficiently.
That’s the hidden reality behind the renewed investment in chip production. Innovation is not driven solely by smaller transistors, new node geometries or more powerful GPUs. It also depends on the physical infrastructure that allows those technologies to operate reliably at scale.
PCHEs may not attract the same attention as chips or AI models, but they underpin the performance, efficiency and scalability of both. Where every watt and every fraction of a degree matters, precision thermal hardware is quietly enabling one of the fastest-growing technology cycles of the decade.
For more information on chemical etching, download Precision Micro’s PCHE application note here. Alternatively, contact the team at +44 (0) 121 380 0100.