How the New Amtrak Acela Fleet Will Enable Business

Photos courtesy of Amtrak.

In 2018, 3.4 million passengers rode Amtrak’s Acela trains. Many of them were business travelers in the heavily populated Northeast Corridor between Boston and Washington, D.C.

Starting in 2021, these passengers will have a new ride.

This week Amtrak revealed photos of its upcoming Acela fleet, set to debut in the Northeast Corridor.

Personal USB ports and adjustable reading lights at each seat will make it easier for business travelers to work on their mobile devices.

The engineer’s cab on the new Acela trains will include “tilting technology,” which gives the fleet the ability to travel at higher speeds, even on curves in the track. Safety remains paramount: Amtrak says the trains will feature real-time monitoring, as well as grab bars and handles to help passengers move through each car.

The Cafe Car uses digital signage for menus, and the new nest area (below) gives passengers a place to congregate outside their seats. Digital screens will provide useful information here as well.

The first-class cabin includes extra legroom and larger seats, and new winged headrests are designed to make sleeping easier.

Each train will hold 25 percent more seats than previous models, with seats constructed from recycled leather. Amtrak also says the fleet is designed to reduce energy consumption by at least 20 percent.

Each seat will come equipped with a dual-sized tray table, better suited to each passenger’s needs for food or digital devices.

To build the new trainsets, Amtrak says it created more than 1,300 new jobs and will make 95 percent of its components in the United States.

For the latest transportation news, head to our industry page. You can also follow us on Twitter @TransportMKSL. Join the conversation on our LinkedIn Transportation Market Leaders page!
