Soundbites – Applied Digital

Applied Digital on the Move


This video follows the major accomplishments of Applied Digital from its IPO in April 2022 through October 2023. It covers major announcements including partnerships with NVIDIA, Supermicro, and Hewlett Packard; the opening of new facilities in North Dakota; the gaining and onboarding of new cloud clients; and the launch of a cloud AI service.


Latest

Applied Digital
Who is Applied Digital?

Applied Digital is a U.S.-based provider of next-generation digital infrastructure, redefining how digital leaders scale high-performance compute (HPC). With dedicated and experienced leadership in power procurement, engineering, and construction, Applied Digital collaborates with local utilities to solve the problem of congestion on the power grid while stimulating the development of more…

Applied Digital
Wes Cummins on Applied Digital’s Promising Horizon

Wes Cummins, CEO of Applied Digital, shares his enthusiasm for Applied Digital’s future, emphasizing the development of a robust HPC strategy. With their first facility successfully operational in Jamestown and plans to expand further in Ellendale and other yet-to-be-announced locations, he is confident that the company’s future achievements will eclipse its impressive past.


Latest

AI costs
QumulusAI Brings Fixed Monthly Pricing to Unpredictable AI Costs in Private LLM Deployment
February 18, 2026

Unpredictable AI costs have become a growing concern for organizations running private LLM platforms. Usage-based pricing models can drive significant swings in monthly expenses as adoption increases. Budgeting becomes difficult when infrastructure spending rises with every new user interaction. Mazda Marvasti, CEO of Amberd, says pricing volatility created challenges as his team expanded its…

GPU infrastructure
Amberd Moves to the Front of the Line With QumulusAI’s GPU Infrastructure
February 18, 2026

Reliable GPU infrastructure determines how quickly AI companies can execute. Teams developing private LLM platforms depend on consistent high-performance compute. Shared cloud environments often create delays when demand exceeds available capacity. Mazda Marvasti, CEO of Amberd, says waiting for GPU capacity did not align with his company’s pace. Amberd required guaranteed availability to support…

private LLM
QumulusAI Secures Priority GPU Infrastructure Amid AWS Capacity Constraints on Private LLM Development
February 18, 2026

Developing a private large language model (LLM) on AWS can expose infrastructure constraints, particularly around GPU access. For smaller companies, securing consistent access to high-performance computing often proves difficult when competing with larger cloud customers. Mazda Marvasti, CEO of Amberd AI, encountered these challenges while scaling his company’s AI platform. Because Amberd operates its own…

custom AI chips
Custom AI Chips Signal Segmentation for AI Teams, While NVIDIA Sets the Performance Ceiling for Cutting-Edge AI
February 18, 2026

Microsoft’s introduction of the Maia 200 adds to a growing list of hyperscaler-developed processors, alongside offerings from AWS and Google. These custom AI chips are largely designed to improve inference efficiency and optimize internal cost structures, though some platforms also support large-scale training. Google’s offering is currently the most mature, with a longer production…
