MarketScale AEC 01/10/19: Let’s Make Your Job (and Your Life) Easier

 

Often it can feel like new technology does nothing but make our lives more complicated, but on today’s episode of the MarketScale AEC Podcast, we take a look at two innovations that are making the lives of AEC professionals easier.

Michael Merle, VP of Business Development at Guerdon Modular Buildings, hops on the show to talk about the increasingly large role of modular construction, explaining its practicality and efficiency for those in the industry. Ryan Johnson, an architect with Clark Nexsen, joins the show to discuss how cloud technology has improved communication and collaboration throughout a project.

The Future of Construction

Modular construction has been around for years now, but why is it finally being embraced by developers? What are the benefits? What are the misconceptions? To help us break it all down, we sat down with Michael Merle, VP of Business Development over at Guerdon Modular Buildings. Michael knows the industry well and offers us many real-world examples of how modular construction has become more efficient and practical than ever.

Communicate & Collaborate Better Through the Cloud

Communication is an important aspect of business in any industry. In AEC, communication between architects, contractors, and sub-contractors is a necessity. With the emergence of cloud-based technology, communication and collaboration have become possible to an extent that hasn’t been seen before.

“The more information we can share and the more sub-contractors are willing to take the data and use it, it’s a win-win,” says Ryan Johnson, Architect with Clark Nexsen. Johnson joins the podcast to walk through each phase of a project, from design to bidding to construction, and explains how cloud technology is beneficial through each one.

He also helps assuage concerns about opening data up to be viewed by others, arguing that all plans can be fully viewed once the building is finished anyway, so there’s little reason for concern.

For the latest news, videos, and podcasts in the AEC Industry, be sure to subscribe to our industry publication.

Follow us on social media for the latest updates in B2B!

Twitter – @AECMKSL
Facebook – facebook.com/marketscale
LinkedIn – linkedin.com/company/marketscale
