How Health Agencies Can Enhance Remote Care Communication

 

This week on I Don’t Care, host Kevin Stevenson sat down with Rajesh Midha, Chief Strategy & Operating Officer of Bottle Rocket, to discuss how the company has personalized digital care through smart navigation that removes the pain points patients experience.

Midha explained that the company was founded in 2008, the day after Steve Jobs announced the Apple App Store. Since then, Bottle Rocket has worked with leading healthcare providers to improve the patient journey, efforts that culminated in the creation of the highest-rated patient app in the country, holding a 4.8 rating in the App Store.

When Midha thinks about the patient journey, the most important step is awareness. How does a patient find out about your health organization? Once awareness occurs, how do you convert that person into a patient? And once they are a patient, how do you keep them retained and engaged while delivering the best possible experience?

When current patients use a healthcare platform, they most likely need access to the following, which represent the biggest pain points of the “digital front door”:

  • Clear billing
  • Easy payment
  • Reminder notifications
  • Ability to connect to a doctor without picking up the phone

With so much change occurring, it can be hard to innovate continuously, so Midha reminded listeners that the healthcare and technology industries should look to other industries for inspiration. Still, healthcare professionals are the only ones who can solve some of these challenges, because other industries “don’t always have the deep empathy for the reality that healthcare is really hard and you are dealing with patient lives.”

“There has never been a more exciting time to be in healthcare,” Midha said.

Catch up on previous episodes of I Don’t Care with Kevin Stevenson!
