To Commercialize, Voice Tech Must First Solve Its ‘Cocktail Party Problem’

Our voice technologies have not been engineered to confront the messiness of the real world or the cacophony of our actual lives.

On average, men and women speak roughly 15,000 words per day. We call our friends and family, log into Zoom for meetings with our colleagues, discuss our days with our loved ones and, if you’re like me, argue with the ref about a bad call in the playoffs.

Hospitality, travel, IoT and the auto industry are all on the cusp of leveling up voice assistant adoption and the monetization of voice. The global voice and speech recognition market is expected to grow at a CAGR of 17.2% from 2019 to reach $26.8 billion by 2025, according to Meticulous Research. Companies like Amazon and Apple will accelerate this growth as they leverage ambient computing capabilities, which will continue to push voice forward as a primary interface.

As voice technologies become ubiquitous, companies are turning their focus to the value of the data latent in these new channels. Microsoft’s recent acquisition of Nuance is not just about achieving better NLP or voice assistant technology; it’s also about the value of the data those voice interactions generate.

Google has monetized every click of your mouse, and the same thing is now happening with voice. Advertisers have found that speak-through conversion rates are higher than click-through conversion rates. Brands need to begin developing voice strategies to reach customers — or risk being left behind.

Voice tech adoption was already on the rise, but with most of the world under lockdown protocol during the COVID-19 pandemic, adoption is set to skyrocket. In 2020, nearly 40% of internet users in the U.S. used smart speakers at least monthly, according to Insider Intelligence.

Yet, there are several fundamental technology barriers keeping us from reaching the full potential of the technology.

The Steep Climb to Commercializing Voice

Worldwide shipments of wearable devices rose 27.2% year over year to 153.5 million by the end of 2020, but despite all the progress made in voice technologies and their integration into a plethora of end-user devices, they are still largely limited to simple tasks. That is finally starting to change as consumers demand more from these interactions and voice becomes a more essential interface.

In 2018, in-car shoppers spent $230 billion to order food, coffee, groceries or items to pick up at a store. The auto industry is one of the earliest adopters of voice AI, but to capture voice technology’s true potential, the experience needs to become more seamless and truly hands-free. Ambient car noise still muddies the signal enough that it keeps users tethered to their phones.

Simply selling more voice-enabled devices won’t magically solve the limitations of voice technology. There are two main challenges confronting the evolution of voice technologies: understanding intent and emotion, and overcoming issues associated with signal-to-noise ratios (SNR) in high-noise or crowded environments.
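To make the second challenge concrete, SNR is conventionally expressed in decibels as the ratio of signal power to noise power. The short Python sketch below (illustrative only; the tone and noise samples are hypothetical, not taken from the article) shows how the figure is computed and why a voice competing with background noise of similar loudness lands near 0 dB:

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Estimate signal-to-noise ratio in decibels from separate
    clean-signal and noise samples (both 1-D float arrays)."""
    signal_power = np.mean(signal ** 2)
    noise_power = np.mean(noise ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# Example: a 1 kHz tone buried in noise of comparable amplitude
# yields an SNR of roughly -3 dB -- the "crowded room" regime.
fs = 16_000
t = np.arange(fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
noise = 0.1 * np.random.randn(fs)
print(f"SNR: {snr_db(tone, noise):.1f} dB")
```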

Do You Understand the Words Coming Out of My Mouth?

Intent has been a core, and improving, focus of most NLP technologies. Swaths of data have been collected to help voice assistants better understand intent. While voice tech has advanced in certain areas, such as customer service channels, it still faces major challenges when confronted with understanding the myriad signals from the real world.

We have grown the capability to understand signals of intent in closed channels that require specific understanding — valuable for doing simple tasks, knowing when to escalate a customer’s problem to a human agent, or seamlessly directing customers through a limited set of options. For the tech to be viable in real-world situations, however, it must understand a much wider variety of situations and inputs.
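As a toy illustration of why those closed channels are tractable (the intents and keywords below are hypothetical, not any vendor’s actual schema), the entire space of valid requests can often be enumerated and matched with simple rules:

```python
# Toy closed-channel intent matcher: the set of valid intents is small
# enough to enumerate with keyword rules, which is why limited-domain
# voice flows work today while open-world understanding does not.
INTENTS = {
    "check_balance": {"balance", "how much", "account"},
    "escalate_to_agent": {"representative", "human", "agent"},
    "reset_password": {"password", "reset", "locked out"},
}

def classify(utterance: str) -> str:
    text = utterance.lower()
    scores = {
        intent: sum(kw in text for kw in keywords)
        for intent, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a human when nothing matches -- the "know when to
    # escalate" behavior described above.
    return best if scores[best] > 0 else "escalate_to_agent"

print(classify("I'm locked out and need to reset my password"))
# -> reset_password
```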

Voice technologies currently work in conjunction with other data points from wearables, and as we gain more signals that we can correlate, we can begin to provide more agile and robust context for greater understanding in voice technologies.

Using Human Tools to Solve Human Problems

The background noise and chatter challenge has been a difficult one for voice technologies to overcome. Much like intent and emotion, we have not engineered our voice technologies to parse real-world cacophony. This “cocktail party problem” is one of the greatest barriers to voice technologies reaching a level of understanding comparable to humans. Exacerbating this challenge is the fact that we simply can’t achieve adequate testing for this effect in a traditional lab environment.
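The classic textbook response to the cocktail party problem is blind source separation. The sketch below uses scikit-learn’s FastICA on synthetic mixtures purely to illustrate the principle; it is not the approach described in this article, and real multi-speaker audio in reverberant rooms is far harder than this toy case:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "voices" recorded by two microphones as linear mixtures.
t = np.linspace(0, 1, 8000)
sources = np.c_[np.sin(2 * np.pi * 5 * t),          # speaker A
                np.sign(np.sin(2 * np.pi * 3 * t))]  # speaker B
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
mics = sources @ mixing.T  # what each microphone actually hears

# FastICA recovers statistically independent sources from the mixtures,
# up to permutation and scaling -- the textbook cocktail-party demo.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mics)
print(recovered.shape)  # (8000, 2): two separated signal estimates
```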

The growing adoption of voice in devices, and the resulting quality and quantity of data we now have, offers the prospect of finally overcoming the cocktail party problem. Solving it will be necessary for the technology to reach its full usefulness.

Solving these problems requires voice tech to meet the human standard for voice and match the complexities of the human auditory system. Yes, you need really good NLP and conversational AI, but this goes deeper — you have to be able to extract clean and complete signals.

When we develop voice strategies that account for and solve these challenges, the business proposition for voice becomes unavoidable. The underlying data takes on enormous value overnight. When you have a clean signal, you have access to contextual data that brands desperately need for quality customer engagements.

Such data will let you understand what type of purchasing decisions happen when a person is energetic or tired. It allows us to know what types of music should be played based on the mood. It allows us to identify speakers accurately and correlate behaviors to individuals in a household.

Better contextualization and understanding need to be a priority so these technologies can develop past their current limitations. To unlock that real-world potential, we need to focus on real-world situations.

About the Author

Ken Sutton is CEO and co-founder of Yobe, a software company that uses edge-based AI to unlock the potential of voice technologies for modern brands.
