A Holistic Approach to Voice Analytics and Optimization

Uncategorized Dec 16, 2020

Phillip Hunter’s work in voice technology can be traced back to the early 90s, when he first took a developer job that involved interactive voice response. Among other roles, Phillip went on to work with early versions of Microsoft’s Cortana before he was recruited to work on Alexa’s developer-focused skills kit.

Since then, Phillip has expanded his work to include chatbots and has founded CCAI (Conversational Collaborative AI) Services. Phillip specializes in strategy, design, and AI-powered optimization.

The importance of voice analytics

Phillip explains that every piece of software’s goal is to go live, but conversational AI systems must also demonstrate that they are working well once they do. Measuring this requires analytics: aspects of performance, such as user engagement and monthly usage, are examined to judge whether set goals are being achieved.

No matter how well-designed software is, there will always be unpredictable errors that must be addressed. Strong teams will put measures in place early in the development and design phases to ensure that these issues can be identified and measured before an application goes into production. As soon as a product is live, tight deadlines and potentially upset users can put significant pressure on those working to improve it. 

Resolution and recognition errors

Recognition errors are often identified through analytics. These errors are the result of miscommunication or misunderstandings between users and an application. Recognition errors interfere with resolution, which is a primary goal in voice interactions. 

Phillip gives the example of a customer calling a bank’s call center to confirm that a specific deposit has occurred. Although requests like these seem simple to automate, the AI must gather information from the user to access a specific account through the bank’s back-end information system. If a “recognition event” in the information-gathering process goes awry—such as the application not recognizing an account number when spoken—the user may find themselves unable to proceed to resolution. 
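The information-gathering step Phillip describes can be sketched in code. This is a minimal, hypothetical example (none of these function names come from a real voice framework): a recognition error on the account number triggers a re-prompt, and repeated failures escalate to a human rather than blocking the path to resolution.

```python
# Hypothetical sketch of one information-gathering step in a voice app.
# A failed "recognition event" here would leave the caller unable to
# proceed to resolution, so we re-prompt and eventually escalate.

MAX_ATTEMPTS = 3

def collect_account_number(recognize, prompt):
    """Ask for an account number, re-prompting on recognition errors."""
    for attempt in range(MAX_ATTEMPTS):
        utterance = prompt("Please say your account number.")
        result = recognize(utterance)  # e.g. speech-to-text plus validation
        if result is not None:
            return {"status": "resolved", "account": result}
    # After repeated recognition errors, hand off to an agent instead of
    # trapping the caller in the system.
    return {"status": "escalate_to_agent", "account": None}

# Toy stand-ins for the speech pipeline:
replies = iter(["uh, one two three", "12345678"])
demo_prompt = lambda text: next(replies)
demo_recognize = lambda u: u if u.isdigit() and len(u) == 8 else None

print(collect_account_number(demo_recognize, demo_prompt))
# {'status': 'resolved', 'account': '12345678'}
```

The retry cap matters: without it, a caller whose account number is never recognized would loop forever instead of reaching an agent.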

Ideally, a customer should be able to reach a resolution without needing to escalate their request. Phillip notes that “you'll find contact centers that are focused on ‘containment.’ What they mean by that, and how it's different than resolution, is not so much whether someone gets something taken care of; it's that they stay in the system, and they don’t get to go to a human agent, which costs more.” However, Phillip is a proponent of taking a more user-centered perspective when judging whether a “resolution” has truly occurred. 
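The gap between containment and resolution is easiest to see on the same set of call records. The sketch below uses invented field names to show how the two metrics can diverge: a caller who stays in the system but gives up counts toward containment, not toward resolution.

```python
# Illustrative call records (field names are invented for this sketch).
calls = [
    {"stayed_in_system": True,  "issue_resolved": True},   # contained and resolved
    {"stayed_in_system": True,  "issue_resolved": False},  # contained, but the caller gave up
    {"stayed_in_system": False, "issue_resolved": True},   # escalated, then resolved by an agent
    {"stayed_in_system": True,  "issue_resolved": False},  # contained, unresolved
]

containment_rate = sum(c["stayed_in_system"] for c in calls) / len(calls)
resolution_rate = sum(c["issue_resolved"] for c in calls) / len(calls)

print(f"containment: {containment_rate:.0%}")  # 75% -- looks healthy
print(f"resolution:  {resolution_rate:.0%}")   # 50% -- half the callers got nothing done
```

Optimizing the first number alone rewards keeping people away from agents; the user-centered view Phillip advocates tracks the second.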

The user’s emotional journey 

Phillip stresses that in addition to analyzing and optimizing for resolution, the user’s overall experience should be examined. Continuing with the example of an automated banking system, Phillip explains that many callers bring heightened emotions into a conversation with AI: “There's this sort of emotional journey that the person is on starting the call with; some uncertainty or maybe even anxiety or irritation.” Ultimately, the user is asked to put their trust in a non-human to alleviate some difficult emotions.


Optimization from a holistic perspective


Once an application is live, the system’s performance must be validated against the set goals, and it is common for initial numbers to be off-target. The next step is to diagnose the reason behind the system’s underperformance. Once a diagnosis is made, a solution can be implemented. However, not every solution succeeds; it is crucial to apply the same metrics again after the fix to confirm that the issue has been sufficiently addressed.
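That measure-diagnose-fix-re-measure loop can be sketched as a single step. The threshold, metric, and diagnosis names below are illustrative assumptions, not details from a real system; the point is that the metric is read again after the fix.

```python
# Hedged sketch of one pass through the optimization loop.
# The target and metric names are illustrative only.

TARGET_RESOLUTION_RATE = 0.80

def optimization_step(measure, diagnose, apply_fix):
    """Measure, diagnose, fix, then re-measure to confirm the fix helped."""
    before = measure()
    if before >= TARGET_RESOLUTION_RATE:
        return {"action": "on_target", "before": before, "after": before}
    cause = diagnose()              # e.g. a failing recognition step
    apply_fix(cause)
    after = measure()               # crucial: verify the fix actually worked
    return {"action": f"fixed:{cause}", "before": before, "after": after}

# Toy stand-ins:
state = {"rate": 0.62}
result = optimization_step(
    measure=lambda: state["rate"],
    diagnose=lambda: "account_number_recognition",
    apply_fix=lambda cause: state.update(rate=0.84),
)
print(result)
# {'action': 'fixed:account_number_recognition', 'before': 0.62, 'after': 0.84}
```

Comparing `before` and `after` is what distinguishes this loop from simply shipping a fix and assuming it worked.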


It’s common for teams to assume that many issues arise from recognition errors. However, Phillip advocates for a more holistic approach when considering optimization. The user’s emotional journey while using an application—especially if the experience is a negative one—can be an indicator of broader problems in an underperforming system. Phillip is also against “user blaming,” which can undermine finding the best solutions. Instead, Phillip posits that optimization is most successful when a “more measured and robust, rigorous approach is taken.” 

Looking towards the future

Phillip hopes that those entering his field are willing to ask a lot of “what if” questions and think outside the box, or even outside a specific application’s current constraints.

“Don't just study the platforms and what they're capable of today. But study conversation, study how people communicate with each other verbally,” says Phillip. “I am trying to build systems that are effective at even more complex tasks. And to do that, I need to have a better understanding of how conversation works, not just how the technology works.”

