This Season’s Mission Will Be Making Space Relatable


Space to Grow is back for a second season with hosts Chris Blackerby and Charity Weeden of Astroscale, exploring where economics, technology, and sustainability in space intersect. In the opening episode, Blackerby and Weeden look back at the past year and ahead at what’s next for the space economy.

“We’re going to focus a lot on partnerships as a dedicated theme of conversations, as they are driving the space economy,” Blackerby said.

Since space exploration and sustainability are such complex initiatives, partnerships are critical.

Blackerby and Weeden shared some big moments in space from the last year. “Even with COVID, it was an incredible year for space,” Blackerby noted.

Weeden’s top moment centered on humanity itself: “Private citizens went into space. That’s a first, and everyone can relate to that.”

While that was a pivotal moment, others highlighted risks and challenges stemming from geopolitical tensions on Earth, such as Russia’s anti-satellite (ASAT) weapon test.

Turning back to good news that demonstrates cooperation and collaboration, the James Webb Space Telescope launched through a partnership among the United States, Europe, and Canada. In more good news, Weeden noted that investment in the space economy isn’t slowing; it’s actually growing.

The hosts then provided a preview of what’s to come this season. They’ll welcome a diverse group of guests from around the world. Topics will range from business to policy to technology.

Weeden also announced a new segment, Space to Grow After Hours. “We’ll be debating topics with the pros and cons in this extra content,” she said.

More in This Series

Where Will the Money Come from in Space Sustainability?

The Evolving Diplomatic Side of Space Sustainability
