Virtual Reality Brings Live Operations to Many More Medical Students

Operating rooms are designed to make the most of limited space. That means they are quite small, allowing only a handful of medical students to observe a live surgery, which in turn limits learning opportunities. This setup is inefficient, to say the least. But what can be done, given the space restrictions of the average OR?

The answer is virtual reality (VR). At the University of Virginia School of Medicine, students are using a very inexpensive form of VR that allows countless students to view complex operations and invasive procedures as they take place. Using a camera in the OR and an app that creates a dual image, students can view the operation on their own smartphones.

It’s simple. The app is turned on, and the phone is placed in a cardboard viewer that looks, interestingly, like a View-Master stereoscope toy. In many ways, the same technology is being applied, except of course the images are moving. The student holds the stereoscope up to their eyes, and the moving images are rendered in 3-D. It’s almost like being there.
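The "dual image" the app produces is a side-by-side stereo pair: two slightly offset views of the same frame, one for each eye, which the cardboard viewer's lenses fuse into a single 3-D image. As a minimal sketch of that idea (not the actual app's pipeline, whose details are not described in the source), one could compose such a pair from a single camera frame with a small horizontal shift standing in for binocular disparity:

```python
import numpy as np

def make_stereo_pair(frame: np.ndarray, disparity: int = 8) -> np.ndarray:
    """Compose a side-by-side stereo image from one camera frame.

    Shifts the frame a few pixels left and right to approximate the two
    eye views, then places them next to each other -- the "dual image"
    layout a Cardboard-style viewer splits between the eyes.
    """
    left = np.roll(frame, -disparity, axis=1)   # view for the left eye
    right = np.roll(frame, disparity, axis=1)   # view for the right eye
    return np.concatenate([left, right], axis=1)

# A flat gray 480x640 RGB array stands in for one frame from the OR camera.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
pair = make_stereo_pair(frame)
print(pair.shape)  # the pair is twice as wide as the source frame
```

A real app would derive true depth from stereo cameras or depth sensors rather than a fixed shift; this sketch only illustrates the side-by-side format.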

A combination of the latest smartphone technology with a cardboard stereoscope is bringing more medical students into operating rooms, so more of them can learn even the rarest procedures. The shortage of space in an OR may not be a critical issue for simple, oft-repeated operations, since a student who misses one today can observe another of the same kind tomorrow. For rare procedures, however, live viewing access is limited. This simple VR technology keeps ORs from being overcrowded while allowing students to make the most of their learning opportunities.
