NanoSessions: Bringing the Best of the Cloud to Locally Hosted DMS
Not everything lives in the cloud. Businesses that handle sensitive data, such as financial records, medical records, and travel documents, must maintain tighter network security than ever before. That requirement has pushed many of them off the cloud and back to on-site hosting for their digital display management systems. Until now.

NanoLumens just launched a locally hosted version of its AWARE digital display management system. Its development was inspired by NanoLumens customers who asked for a one-to-one version they could host on-site.

“We’ve been asked a few times by clients, ‘Hey, is there a locally hosted version of this?’” said Brice McPheeters, Director of Product Line Management and Customer Service, our guest on this new episode of NanoSessions, a NanoLumens podcast. “We made sure we created a true one-to-one interface. The exact way you interact with the cloud version is one-to-one with what we reproduced in the locally hosted version.”

McPheeters says ease of use is important because digital signage is everywhere and used by everybody. It’s no longer just integrators or AV professionals running display management systems.

“Everyone throws that term around, but we’ve actually gone head to head with our competitors,” he said. “We’re just extremely easy to use, whether you’re an operator, an IT manager, or anyone who has to educate new users.”

For the latest news, videos, and podcasts in the Pro AV Industry, be sure to subscribe to our industry publication. A new episode of the Pro AV Show drops every Thursday.

Follow us on social media for the latest updates in B2B!

Twitter – @ProAVMKSL
Facebook – facebook.com/marketscale
LinkedIn – linkedin.com/company/marketscale
