Companies Will Shift Their Focus to Profitability for Their Path to Success


All companies should look at automated purchasing and compliance in today’s marketplace. Tyler Kern and co-host Hunter Austin, Managing Partner of Kelley Austin, got some hot tips on the latest trend in automated payment control solutions from James Thomas, CEO and Founder of Itemize.

“In the B2B space, where you need a lot more data in that transaction, that hasn’t effectively been automated the same way the consumer space has been automated,” Thomas said.

The financial control software a company uses may be automated, but getting the information into the system is often still a manual effort. Thomas believes the AP (accounts payable) world is ripe for digitization. “The data we need to automate business purchasing, so there’s not a lot of people involved in moving paper and workflow systems, that’s a problem most people probably thought was solved in the twentieth century. To make that happen, we need a lot more data coming in some manner into our financial control or expense management systems.”

The opportunity in payment control is to reduce costs and mistakes and free up precious human resources for higher-value activities. Thomas, who has deep experience in the payments industry, founded Itemize to provide solutions for B2B because the need is extensive. “We’ve all heard about and seen the labor shortages in the United States and globally,” Thomas said. “And that labor shortage is making it incumbent upon finance departments to look towards automation as a potential way to solve the problem.”

Hosts Tyler Kern and Hunter Austin sit down with Itemize CEO and founder James Thomas to talk about the future of automated payment control solutions. 


With the right automated solutions, an AP specialist can increase invoice processing volume and gain time elsewhere. “We can do five to ten invoices in a minute,” Thomas said. “So, that person can look at the data and use it for analysis and intelligence. That’s a welcome contribution to an accounts payable department.”

