How the Workflow Can Make or Break the Coloring Process

This is In Focus, by MarketScale. A podcast by video professionals for video professionals, putting in focus the topics, teachers and tips guiding the video industry today. With your host, MarketScale’s Sr. Director of Video Production, Josh Brummett.

 

Mike Nuget, a freelance colorist out of New York, began his 17-year career in the shipping department and has since had the fortune to work on some truly extraordinary shows and videos. As they say, he started at the bottom and is working his way to the top. On this episode of In Focus by MarketScale, he joins host Josh Brummett to discuss his career trajectory, the pros and cons of going freelance, the stages of the color process, the evolution of camera technology, workflows, how professional coloring can elevate a project, and more.

Officially titled both Colorist and Finishing Editor, Nuget remarked, “There’s days when I’m doing one role in the morning and another role at night. Then the next day I’m doing both roles all day, and that really keeps me on my toes. And I like it, too, because there’s some projects that I fit better for doing (quote/unquote) just onlining. And there are some projects where the client does their own onlining and gives me a final file and I just do the color. So, it’s good to be versatile and have the ability to offer the client either or, or both.”
