Over the past weeks, several prospects and customers have asked me how they should evaluate the performance of virtual employees in their workforce. It sparks an interesting debate: Do I apply the KPIs that I use for my human workforce, or do I revert to the traditional SLA language of technology evaluation? The former might sound far-fetched until you take a closer look at each of the qualities you prize most highly in your human agents and see how important these are for AI agents too, if they are to deliver the customer satisfaction and experience outcomes you are focused on.
Start with the metrics you use to evaluate your human agents
Breadth and Depth of Knowledge
How many tasks, processes and situations can an individual human agent handle without getting somebody else involved for support? We often group knowledge into domains or specialties. For example, in IT support one agent might specialize in identity and access, another in Windows, another in Microsoft Office, and so on. The broader the set of support issues an agent can handle, the more productive he or she will be. The same is true of virtual agents, but here we refer to the breadth of their knowledge as ‘coverage’. Just like humans, coverage for virtual agents is tracked per domain, so we measure how many of the total identity and access management requests our virtual agent is currently able to handle. We generally strive for close to 100% per domain, as that will improve first-call resolution rates. Rather than throw their virtual agents in at the deep end, many organizations grow coverage over time and make sure the agent masters one area at a time.
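Per-domain coverage is simple enough to compute from ticket logs. Here is a minimal sketch, assuming a hypothetical log format of (domain, handled-by-virtual-agent) pairs; the field names and log shape are illustrative assumptions, not a specific product's API:

```python
from collections import Counter

def coverage_by_domain(tickets):
    """Per-domain coverage: the share of requests the virtual agent
    handled without human involvement. `tickets` is a list of
    (domain, handled_by_va) tuples -- a hypothetical log format."""
    totals, handled = Counter(), Counter()
    for domain, by_va in tickets:
        totals[domain] += 1
        if by_va:
            handled[domain] += 1
    return {d: handled[d] / totals[d] for d in totals}

log = [
    ("identity_access", True), ("identity_access", True),
    ("identity_access", False), ("windows", True),
]
print(coverage_by_domain(log))
# identity_access handled 2 of 3 requests; windows 1 of 1
```

Tracking this ratio per domain, rather than as one global number, is what lets a team grow coverage one area at a time.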
Level of Understanding
When human agents work on an issue that is within the scope of their knowledge, we want to know their resolution rate. We expect them to resolve problems in the areas where they have been trained. To be fair, however, we also have to monitor whether a failure to resolve a query was in fact due to faulty information the agent received. Similarly, we look at the accuracy of our virtual agents. We track accuracy by analyzing not only the number of queries that are escalated but also the number of interactions that are abandoned. In some cases, escalations are appropriate, because business rules require human intervention in response to a query or because the query does not fall within the virtual agent’s coverage area. Based on our experience, within a few months accuracy should rise to around 90% of all queries that fall within the specified coverage area.
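That distinction between appropriate and inappropriate escalations matters when you compute the number. A minimal sketch, assuming hypothetical interaction records with `in_coverage`, `outcome`, and `rule_escalation` fields (names of my choosing, not from any particular platform):

```python
def accuracy(interactions):
    """Accuracy over in-coverage queries: resolved / (resolved +
    escalated + abandoned), excluding escalations that business rules
    require and queries outside the coverage area. Record fields are
    hypothetical."""
    in_scope = [i for i in interactions
                if i["in_coverage"] and not i.get("rule_escalation", False)]
    if not in_scope:
        return None  # no in-scope queries to measure
    resolved = sum(1 for i in in_scope if i["outcome"] == "resolved")
    return resolved / len(in_scope)

sample = [
    {"in_coverage": True, "outcome": "resolved"},
    {"in_coverage": True, "outcome": "abandoned"},
    {"in_coverage": True, "outcome": "escalated", "rule_escalation": True},
    {"in_coverage": False, "outcome": "escalated"},
]
print(accuracy(sample))  # 1 resolved of 2 countable queries -> 0.5
```

Counting the rule-mandated escalation or the out-of-coverage query against the agent would understate its real accuracy, which is why both are filtered out before dividing.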
Customer Satisfaction (CSAT)
Customer satisfaction is generally measured the same way for human and virtual agents, usually through a short survey about the resolution. However, because response rates to surveys are usually low, other measurements are used to approximate customer satisfaction. We assume that first-call resolution rate, on-hold or waiting time, and time-to-resolve all have an effect on CSAT. Virtual agents hold an advantage over humans when it comes to waiting times, but interestingly we have also found that by smoothing out a process and integrating systems seamlessly, time-to-resolve can be notably reduced as well.
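One way to turn those assumptions into a trackable number is a weighted proxy score. The sketch below is purely illustrative: the weights and the wait/resolve caps are arbitrary assumptions you would tune against your own survey data, not an industry standard formula:

```python
def csat_proxy(fcr_rate, avg_wait_min, avg_resolve_min,
               w_fcr=0.5, w_wait=0.25, w_resolve=0.25,
               wait_cap=10.0, resolve_cap=60.0):
    """Illustrative 0-1 proxy for CSAT built from first-call resolution
    rate, average waiting time, and average time-to-resolve. Weights and
    caps are hypothetical placeholders."""
    wait_score = max(0.0, 1.0 - avg_wait_min / wait_cap)
    resolve_score = max(0.0, 1.0 - avg_resolve_min / resolve_cap)
    return w_fcr * fcr_rate + w_wait * wait_score + w_resolve * resolve_score

# Perfect FCR with zero waiting and instant resolution scores 1.0:
print(csat_proxy(fcr_rate=1.0, avg_wait_min=0.0, avg_resolve_min=0.0))
```

The point of a proxy like this is trend-watching between surveys, not replacing them; any such composite should be periodically validated against actual survey responses.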
As with all statistics, you do need to dig a little deeper to understand what is driving the scores. Here’s a cautionary tale. In one client engagement we found that Amelia was consistently scoring lower than human agents in CSAT. The customer assumed it must be down to the quality of the interaction, but when we investigated the customer feedback we found something quite surprising. Human agents were able to give customers discounts and frequently did so, whereas Amelia was not empowered to do so. We requested that a similar discretionary discount capability be allowed for Amelia. The customer service manager refused, but also explained that her human colleagues were not authorized to provide discounts either. The human agents had found a loophole in the system and were acting against policy. Once the loophole was removed, the CSAT scores for human and digital employees realigned quickly.
For human agents metrics such as on-time arrival, availability and work ethic are important. How does that translate into important KPIs for a virtual agent? Availability of the platform is one critical measure; if the system is down, the virtual employee can’t ‘get to work’. Technical service level management, monitoring, automated recovery and disaster recovery plans are still essential factors in ensuring your virtual workforce is reliable.
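Availability reduces to a familiar ratio. A minimal sketch (the 30-day figures below are illustrative, not a target):

```python
def availability_pct(uptime_hours, period_hours):
    """Platform availability as a percentage of the measurement period --
    if the system is down, the virtual employee can't 'get to work'."""
    return 100.0 * uptime_hours / period_hours

# Hypothetical 30-day month (720 h) with 48 minutes of downtime:
print(round(availability_pct(719.2, 720.0), 2))  # 99.89
```

The same arithmetic works in reverse for planning: an availability target fixes the downtime budget your monitoring and automated-recovery tooling must stay within.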
In addition to individual service agent performance, there are KPIs and metrics affecting the performance of teams as a whole. In many call centers attrition can be a significant issue, which in turn leads to significant onboarding and training costs. Consistency may also be hard to achieve: how many A-players, versus B- and C-players, do you have in your service center? By contrast, virtual agents are thoroughly consistent and, once trained, never forget a thing they’ve been taught.
Any good performance review should have an improvement plan. For humans that could include anything from technical education to ethics and compliance classes. Virtual agents need improvement plans too. On most of our projects, a small team is set up to monitor all the metrics we’ve noted above and determine how to prioritize the backlog to improve coverage, accuracy and customer satisfaction. Things change all the time, so there will always be a need to make adjustments. In parallel, adding new coverage areas continues to develop your agents’ ability to deliver the desired outcomes.
New Roles in Customer Service
Virtual agents open up the possibility to create a whole new set of roles in customer service for humans as well as their digital colleagues. For starters, the creation, maintenance and performance improvement supervision for virtual agents requires the creation of new roles that leverage new skills. For instance, top performing human agents might become escalation managers, working alongside the virtual agents to ensure KPIs are aligned across the digital and human workforce and steering teams towards new goals on an ongoing basis. Imagine having a single – very smart and knowledgeable – customer service representative managing and orchestrating 10 or more virtual agents performing all the routine, high-frequency tasks, while escalating the more complex, unique and challenging ones to human agents.
So, while implementing a digital workforce won’t completely eliminate those “uncomfortable” human performance reviews, aligning KPIs across a company’s whole workforce can bring out the strengths in all its employees. The focus shouldn’t be on comparing them with one another but on combining their skills to meet the common goal: creating an outstanding experience for your customers.