Power, Cooling, and Risk: What It Takes to Bring a 100MW AI Data Center Online

The industry knows how to build data centers. What it’s still figuring out is how to turn on AI factories at scale. With facilities now crossing 100 megawatts—far beyond the 5- to 10-megawatt norm of traditional builds—operators are no longer just validating equipment. They’re testing whether entire systems—power, cooling, controls, and the teams behind them—can hold together under real-world conditions. As AI demand accelerates, that transition from construction to live operations is becoming one of the most critical, and least understood, phases of the data center lifecycle.

When you flip the switch on a 100-megawatt AI facility, what determines whether it runs smoothly—or starts to break?

Architects of Acceleration Volume II opens with a closer look at what it really takes to bring an AI data center online—focusing on the commissioning phase where infrastructure is tested, systems are validated, and facilities transition from construction to live operations. Host Philbert Shih, Founder and Managing Director of Structure Research, sits down with Laura Laltrello, COO, and Stephen Lattimer, VP of Design & Engineering at Applied Digital. Together, they unpack the critical and often misunderstood phase of commissioning—where AI infrastructure moves from theoretical readiness to real-world performance under load. The discussion centers on the operational, technical, and organizational realities of bringing hyperscale AI facilities online.

What you’ll learn…

  • Commissioning at scale is not a checklist—it’s a months-long systems validation process, often beginning 30–45 days after groundbreaking and intensifying in the final stretch before ready for service (RFS).
  • Mechanical systems—especially cooling—pose the greatest operational risk, requiring constant recalibration as real-world conditions diverge from design assumptions.
  • Success hinges on coordination, sequencing, and communication, with early planning, vendor alignment, and in-person collaboration proving essential to meeting aggressive timelines.

Laura Laltrello serves as the Chief Operating Officer of Applied Digital, where she leads operational execution and strategy for large-scale AI and data center infrastructure. She brings nearly 20 years of executive leadership experience across data centers, building technologies, and energy systems, along with a strong track record of managing multi-billion-dollar P&Ls and delivering complex global projects. Prior to Applied Digital, she held senior leadership roles at Honeywell and Lenovo, where she scaled global services businesses, led enterprise operations, and drove major transformations in infrastructure and technology delivery.

Stephen Lattimer is the Vice President of Data Center Design at Applied Digital, where he focuses on the design of data center infrastructure. He previously served as a Data Center Architect at Flexential and spent nearly three decades at Sturgeon Electric in roles ranging from electrician to project lead. His career reflects deep, hands-on experience in electrical systems, project execution, and data center development. This progression from field roles to leadership gives him a practical, ground-level perspective on designing and delivering complex infrastructure projects.

Article written by MarketScale.
