Applied Digital Redefines Modern Computing with Unmatched Hosting Power for NVIDIA H100 Servers
As the digital landscape continues to evolve at a rapid pace, Applied Digital emerges as a key player in reshaping data center infrastructure. In a recent episode of the AI-First Business Podcast, the spotlight was on Applied Digital’s pioneering approach to meeting the burgeoning demands of modern computing.
Host Tina Yazdi welcomed Wes Cummins, CEO and Chairman of Applied Digital, to discuss the high-power-density data centers that are setting new standards in the industry. The conversation delved into the challenges and solutions involved in hosting large numbers of NVIDIA H100 servers, a task beyond the capabilities of conventional data centers. Cummins highlighted the company’s focus on building facilities purpose-built for heavy-duty workloads like AI training, which require extensive computational power but can tolerate higher latency.
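To put that power-density gap in rough perspective, the back-of-the-envelope sketch below compares an assumed dense rack of 8-GPU H100 servers against a typical legacy rack budget. The wattage figures and rack configuration are illustrative assumptions for this article, not numbers cited by Cummins or Applied Digital.

```python
# Illustrative rack power estimate (assumed figures, not from the podcast).
# An 8-GPU H100 server is commonly specced around ~10 kW peak, while a
# conventional data center rack is often provisioned for roughly 5-15 kW.

H100_SERVER_PEAK_KW = 10.2       # assumed peak draw of one 8-GPU H100 server
SERVERS_PER_RACK = 4             # assumed dense-rack configuration
CONVENTIONAL_RACK_LIMIT_KW = 15  # assumed upper bound for a typical legacy rack

rack_draw_kw = H100_SERVER_PEAK_KW * SERVERS_PER_RACK
print(f"Estimated rack draw: {rack_draw_kw:.1f} kW "
      f"(vs. ~{CONVENTIONAL_RACK_LIMIT_KW} kW in a conventional rack)")
# Estimated rack draw: 40.8 kW (vs. ~15 kW in a conventional rack)
```

Under these assumptions, a single dense rack draws several times what a legacy facility is built to deliver and cool, which is the gap Applied Digital’s purpose-built sites are designed to close.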
Cummins explained the technical nuances of AI model training, the process behind technologies like ChatGPT. He emphasized the need for data centers capable of handling the immense power draw and data movement involved in training AI models. Unlike traditional data centers, Applied Digital’s facilities are engineered to manage these tasks efficiently, even at higher latency. This design is particularly suited to the training phase of AI, where large volumes of data are processed over extended periods.
Furthermore, Cummins discussed the role of data centers in the inference phase of AI, where trained models are deployed to serve real-time interactions. He stressed that while training can tolerate higher latencies, inference demands low-latency environments for responsive performance.