Ex-Intel Chief’s New Startup Tackles ARM-based Server Chips

Former Intel president Renee James is dipping her toes into the startup world. Ampere Computing, an ARM-based server chip company, was founded in October 2017 and already has about 250 employees. Its processors feature a custom, high-performance Armv8-A 64-bit server core running at up to 3.3 GHz, with support for 1 TB of memory in a 125-watt power envelope, according to the company.

Built on the foundations of Applied Micro, the new company is looking to take ARM into an area with great potential but minimal foothold: the 64-bit chips that power servers and storage devices in the world’s datacenters.

“There aren’t that many people in the world who build high-performance microprocessors,” said James. “And I do think we need new views on what’s next. It’s very risky, it’s very hard, but it’s incredibly rewarding. Every day you can come in with the idea you’re inventing something nobody’s done yet.”[1]

Ampere says its chips give customers the freedom to accelerate the delivery of some of the most memory-intensive applications, such as artificial intelligence, big data, machine learning, and databases in the cloud.

[1] http://www.oregonlive.com/silicon-forest/index.ssf/2018/02/renee_james_former_intel_presi.html
