Keep Cool with Data Center Heat Recovery Strategies

Pedro Matser started KyotoCooling some 15 years ago when he and his colleagues were asked by a data and telecommunications center in the Netherlands to find a more energy-efficient cooling process.

“We sat down with a group of people to come up with an energy-efficient solution,” Matser said on this episode of Not Your Father’s Data Center.

He and his colleagues ran through their options, including traditional heat recovery, a popular strategy in Europe that saves energy in winter by transferring heat from outgoing exhaust air to incoming fresh air.

“When I looked at these techniques, I found you could use them for a data center,” he said. “You don’t want to bring the air from the data center outside and exchange it for fresh air.”

Instead, two separate air loops are created, an outside-air loop and an inside loop, with heat transferred between them to deliver free cooling without mixing the air streams.

“We found the results stunning – in [the Netherlands],” Matser said, “we could save 90% of the energy required to cool the data center.”
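To see how the two-loop arrangement produces savings like that, a rotary heat wheel can be modeled by its sensible effectiveness: the warm inside loop is cooled toward the outdoor temperature while the two air streams stay physically separated. Here is a minimal sketch of that relationship; the temperatures, the 75% effectiveness figure, and the function name are illustrative assumptions, not KyotoCooling specifications.

```python
def wheel_supply_temp(t_return_c: float, t_outdoor_c: float,
                      effectiveness: float = 0.75) -> float:
    """Estimate the supply-air temperature leaving a rotary heat wheel.

    Simple sensible-effectiveness model: return air from the data
    center is cooled toward the outdoor temperature while the two
    air streams stay physically separated. The 0.75 effectiveness
    is an assumed placeholder, not a KyotoCooling specification.
    """
    return t_return_c - effectiveness * (t_return_c - t_outdoor_c)

# Example: 35 C return air against 10 C outdoor air
print(wheel_supply_temp(35.0, 10.0))  # 16.25 C supply air, no compressors needed
```

Whenever the outdoor air is cool enough, the wheel alone brings the supply air down to target, which is why mild climates like the Netherlands see such large savings.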

In this episode, Matser and Jamie Nickerson, head of electrical and mechanical engineering at HED, joined host Raymond Hawkins to talk about the Kyoto Wheel by KyotoCooling.

Nickerson explained how the Kyoto Wheel works.

“When you think about a traditional office building, most often there is a direct air-side economizer to save resources when outside conditions are cooler than inside,” Nickerson said.
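In control terms, an air-side economizer boils down to a comparison between outdoor and return-air conditions. A minimal sketch of that decision follows; the thresholds are assumed values, and real controllers also weigh humidity and air quality.

```python
def use_outside_air(t_outdoor_c: float, t_return_c: float,
                    deadband_k: float = 2.0) -> bool:
    """Simplified air-side economizer decision.

    Bring in outside air only when it is usefully cooler than the
    return air. The 2 K deadband (an assumed value) keeps the system
    from hunting back and forth near the crossover point; real
    controllers also check humidity and air quality.
    """
    return t_outdoor_c < (t_return_c - deadband_k)

print(use_outside_air(15.0, 24.0))  # True: free cooling is available
print(use_outside_air(23.0, 24.0))  # False: inside the deadband
```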

To illustrate how equipment heats a space, he noted that when you place hot soup in a refrigerator, not only does the refrigerator cool the soup, but the soup also warms the air around it.

“When you have a data center, you have a lot of equipment generating a lot of heat,” Nickerson said. “We push cooler air into the space, absorbing the heat, then the air stream needs to reject the heat to continue the cycle.”
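To put rough numbers on that cycle, the airflow a data hall needs follows from the sensible heat balance Q = m·cp·ΔT. A small sketch, where the 500 kW load and the 12 K supply-to-return temperature rise are assumed example values:

```python
AIR_DENSITY = 1.2   # kg/m^3, roughly air at sea level
AIR_CP = 1006.0     # J/(kg*K), specific heat of dry air

def required_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb it_load_kw of heat
    while the air warms by delta_t_k from supply to return."""
    mass_flow_kg_s = (it_load_kw * 1000.0) / (AIR_CP * delta_t_k)
    return mass_flow_kg_s / AIR_DENSITY

# Example: a 500 kW hall with a 12 K supply-to-return rise (assumed values)
print(round(required_airflow_m3s(500.0, 12.0), 1))  # ~34.5 m^3/s of air
```

All of that heat then has to be rejected somewhere, which is exactly the job the heat wheel handles on the outside-air loop.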

For the latest news, videos, and podcasts in the Software & Technology Industry, be sure to subscribe to our industry publication.

Follow us on social media for the latest updates in B2B!
Twitter – @MarketScale
Facebook – facebook.com/marketscale
LinkedIn – linkedin.com/company/marketscale
