When business owners are in the market for call center services, their primary goal is probably pretty simple: capture every call, whether it comes in while they're working with another customer, while they're out of the office, or after hours. Lead capture is essential. And in order to effectively capture a lead, the person answering the phone needs to possess certain qualities that make the caller feel like they're speaking with someone who is genuinely interested in helping them. It seems like a no-brainer, right? If you want quality calls, hire quality agents. But is it really that easy?
Quality agents are definitely the main ingredient to successful calls, yet business owners don’t typically have a say in which call center agents are answering their phones. That means they are not able to hand-select who they believe will resonate with their customer base. Instead, hiring, training, and coaching are left almost entirely up to the call center, and it is their team’s responsibility to ensure that call quality is second to none. For owners who are used to doing nearly everything on their own, passing the baton off to a third party can be a little scary. What if the call center can’t achieve the same rapport with customers that the office staff has? Even if the agents are exactly whom they’d hire given the chance, everyone is going to have a bad day, forget a procedure, or speak with an unintended tone from time to time. This is human nature. That being said, there is a way to mitigate call handling issues. Welcome to the wide world of call center quality assurance monitoring.
The Why
The argument in favor of working with live agents over AI is also the argument against: they are not robots, which makes them sentient and fallible. Mistakes will happen. Quality assurance, or QA, is just as critical to a superior call center experience as making sure someone is there to pick up the phone. Let’s take a look at how call quality is monitored, who monitors it, and what metrics and essential critiques keep service at its zenith.
The How
Call quality is monitored in two ways: live monitoring and post-call review. Centers use a combination of these to create an overall picture of how an agent is performing.
Live Monitoring
Call center reviewers can listen in on active calls without the caller or the agent knowing that someone else is on the line. This is completed via the center’s ACD, or automatic call distributor. Agents know that live quality reviews may occur from time to time, but they do not know when. This ensures that the process is completely random. Agents do not have an opportunity to prepare or modify their behavior in any way, so reviewers get an accurate read on the agent’s tone and demeanor, as well as the presence of any bad habits, such as talking over the caller, leaving long stretches of dead air, or failing to practice active listening and asking the caller to repeat themselves.
Post-Call Review
During post-call review, reviewers can listen to the recording in context, following along with the script in the process. This allows for the simultaneous evaluation of the agent’s performance and the function and flow of the call script. Reviewers will be going over the 3 L’s: Listening, Looking, and Learning.
Listening for things like:
- Did the agent read the scripted text as written?
- Did the agent ask all scripted questions?
- Did the agent verify data with the caller?
- Did the agent exercise good judgment?
- Did the agent display empathy?
- Did the agent actively listen, or did the caller have to repeat themselves?
- Did the agent utilize the FAQs and provide accurate answers to callers’ questions?
- Was the agent distracted by background noise, or was the agent in a quiet environment?
Looking at the data the agent gathered, and checking it for:
- Spelling and grammatical issues
- Proper formatting and sentence structure
- Capitalization and punctuation
- Adherence to script functions such as completing required fields, verification toggles, and not entering junk data
Learning how to improve the script based on how the call was processed:
- Was it too slow? Too fast?
- Is the text abrupt?
- Are we making scripted promises that we can’t keep?
- Are we not providing information that most callers need, such as callback time, business hours, or services offered?
The Who
Call quality may be monitored by a few different people, all of whom will have a unique approach and at times contrasting views regarding call and agent assessment.
The Clients
If the center is transparent and provides call recordings to clients, then you can bet that clients will be listening to calls from time to time. Some clients listen to every call, others may listen to a handful a week, and some may not listen to any. It all depends on the kind of time the client has and how important it is for them to get what they expect from the service. There may be thousands of accounts, not all of which require a hands-on approach. Clients who go over calls with a fine-tooth comb may have specific views on what constitutes a good call, and their feedback does not always align with that of the center’s team. So how does one know what is valid feedback and what isn’t? Well, everything is valid in a sense, but we’ll get into that more in a bit.
The Supervisors
Supervisors and team leads generally keep tabs on agents throughout their shifts by popping into their calls, whether listening on the backend, being asked for in-call assistance due to a programming or system issue, or being conferenced in to handle an escalation for an upset caller. While they may not be following along with the script during a live call, they will be listening for an agent’s general phone presence and customer service approach. This gives them a true sense of how the agent is performing, and it can immediately bring coaching points to the forefront that can be implemented in real time.
The Quality Assurance Team
Of course, monitoring agent quality necessitates the presence of a quality assurance team. The QA team’s raison d’être is to evaluate calls daily, adhering to a standard review quota for each agent, per week or per month, depending on how tasks are structured. Standardizing the number of calls reviewed and the basis on which they are reviewed ensures that all agents are receiving consistent feedback to keep them on track.
The QA team plays a pivotal role in agent successes, as they review a higher volume of calls than any other department. Without a robust QA team, it’s easy for recurring problems to fall through the cracks. For example, larger call centers may have hundreds of agents spread across multiple sites. That requires a systematic approach to the review process, with each branch reporting back to the operations team. Issues that are broadly found throughout the sites may result in specific training modules sent out to all agents, the completion of which may be required within a fixed time frame or prior to signing on for the agent’s next scheduled shift.
The Trainers
While agent training always includes plenty of test calls for every agent in the class, test calls use test scripts combined with role play, and they can’t always give agents a clear picture of what it will be like to navigate an actual call. Yes, trainees will often shadow agents taking live calls. But before that happens, the QA team’s research proves a valuable resource.
After the QA team has completed their review of any number of calls, they may earmark some of those recordings for use by the training department. Trainers then have the opportunity to offer real-life examples of best and worst practices. They’ll be able to pull up the call and listen with the trainees while they follow along with the call script. This is a great way to get trainees thinking about how they would have handled the same situation: Did the agent who fielded the call do well? Follow the script? Listen carefully? Those questions, and plenty of other tidbits, can be gleaned from a recording.
In addition, the training department is responsible for continuous agent education. Rehashing commonly forgotten concepts and outlining new procedures will ultimately help the team maintain a higher level of quality and application knowledge beyond what is developed during initial onboarding. Continuing ed can be in the form of interactive quizzes, PDFs, PowerPoint presentations, visual aids affixed to workstations, or any tool that will get the point across and keep agents abreast of the latest updates and changes in protocol.
The What
The Quality Assurance team is responsible for compiling all the pertinent elements of the quality review process into one universal review form, which can be accessed by approved internal parties. While agents are graded separately in several key areas, utilizing a standardized form with static categories and benchmarks maintains the integrity of the process. Although the number of categories evaluated and their corresponding descriptions may vary from center to center, reviewers are generally focused on these three things:
- Presentation
- Conversational Skills
- Call Management
Categories are then drilled down into more precise metrics, each with a percentage weight; the weights within each category sum to 100. For example:
Presentation (100%)
- Greeting (15%)
- Enunciation (25%)
- Tone (35%)
- Demeanor (25%)
Conversational Skills (100%)
- Active Listening (35%)
- Call Flow (25%)
- Empathy (25%)
- Filling Silences (15%)
Call Management (100%)
- Adhering to Script (30%)
- Following Instructions (30%)
- Data Entry (30%)
- Call Speed (10%)
Upon completion of the review form, individual questions are tallied within the larger category, and category scores are averaged into one overall grade. The scores, along with reviewers’ feedback, are presented to agents during one-on-one coaching sessions, which occur monthly at a minimum but may be needed weekly if and when issues arise.
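As a sketch of how this tallying might work in practice, the snippet below encodes the example rubric above and computes an overall grade. The rubric weights come from the article; the scoring scale (0.0 to 1.0 per metric) and the sample review data are hypothetical, and real centers will have their own forms and formulas.

```python
# Hypothetical sketch of tallying a QA review form.
# Each metric carries a percentage weight (weights sum to 100 per category);
# the reviewer scores each metric from 0.0 (missed) to 1.0 (full marks),
# and the three weighted category scores are averaged into one grade.

RUBRIC = {
    "Presentation": {
        "Greeting": 15, "Enunciation": 25, "Tone": 35, "Demeanor": 25,
    },
    "Conversational Skills": {
        "Active Listening": 35, "Call Flow": 25,
        "Empathy": 25, "Filling Silences": 15,
    },
    "Call Management": {
        "Adhering to Script": 30, "Following Instructions": 30,
        "Data Entry": 30, "Call Speed": 10,
    },
}

def category_score(weights, scores):
    """Weighted sum: each metric's 0-1 score times its percentage weight."""
    return sum(weights[metric] * scores[metric] for metric in weights)

def overall_grade(rubric, review):
    """Average the weighted category scores into one overall grade."""
    totals = [category_score(rubric[cat], review[cat]) for cat in rubric]
    return sum(totals) / len(totals)

# Sample review (hypothetical data): the agent rushed their tone,
# left some dead air, and made a data-entry slip.
review = {
    "Presentation": {"Greeting": 1.0, "Enunciation": 1.0,
                     "Tone": 0.5, "Demeanor": 1.0},
    "Conversational Skills": {"Active Listening": 1.0, "Call Flow": 1.0,
                              "Empathy": 1.0, "Filling Silences": 0.0},
    "Call Management": {"Adhering to Script": 1.0, "Following Instructions": 1.0,
                        "Data Entry": 0.5, "Call Speed": 1.0},
}

print(round(overall_grade(RUBRIC, review), 1))  # → 84.2
```

The same structure makes it easy to report per-category scores alongside the overall grade during a coaching session, so the agent can see exactly where points were lost.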
Do agents have a say?
Coaching is instrumental for obvious reasons, but with all the focus on the hallmarks of quality monitoring, there is a seemingly obvious resource for improving call quality that may be overlooked: the agents. After all, who has more up-to-the-minute experience taking calls? Agents can effect all kinds of change just by being vocal about what they are seeing day to day, and about the kinds of “quality of life” improvements that will make their jobs easier. For example:
- Agent Reports: Programming errors
- Resolution: Script adjustments
- Agent Reports: No FAQs to assist callers
- Resolution: Contact client for additional details or updates
- Agent Reports: Third-party website not working
- Resolution: Tech team addresses firewall or security issues
- Agent Reports: Audio issues
- Resolution: Agent hardware or software issues addressed
- Agent Reports: System is not user-friendly
- Resolution: Development team investigates ways to upgrade platform features
Agents do not have an easy job, and their observations are welcome at every turn, inside or outside of coaching sessions!
Is all feedback valid?
When the QA team is analyzing a call that the client has reviewed, they will certainly take the client’s opinion and commentary into consideration. In the majority of cases, the QA team’s conclusion echoes that of the client’s; however, there are no absolutes. Just like relaying a story to a group of people, each listener will have a slightly different takeaway. The reviewer’s assessment of a call may diverge from the client’s assessment of the same call. The agent’s assessment of that call during a coaching session may bring to light questions or ideas that neither the reviewer nor the client had even considered. Let’s look at a few reasons why viewpoints may not be aligned:
- The client is grading the call based on the client’s in-depth understanding of their own business and may be expecting agents to have that same level of knowledge.
- The client is grading the call based on what they thought should have happened, but the script is not laid out in a manner conducive to that outcome.
- The client is grading the call accurately based on the script and the caller’s request, but the center may not have properly programmed the script or set expectations with the client on how to minimize agent confusion.
- The client is grading the call based on the current version of a script, but script changes may have taken place between when the call came in and when it was ultimately reviewed. If neither party is immediately aware of account updates, this could make for improper grading for both center and client.
The great news is that there is room for all perspectives! Regardless of what feedback is provided or who is providing it, it is all valid because it is a means to the same end: better calls.
- Positive and negative reviews stemming from overly complicated or over-simplified scripts may result in global changes during the new account intake and onboarding process.
- Positive and negative reviews stemming from general agent knowledge and preparedness may result in global changes to initial agent training and continuing education.
- Agents may contribute to localized account or global software changes by being honest about their experiences and what they need to ensure that every call is a winner.
QA Best Practices
Everyone has a different idea of what constitutes a good call. Sure, we can create a uniform scoring rubric. But despite a carefully measured approach, quality assurance is still subject to variability, no matter who is doing the scoring.
Not sure how to spearhead call quality monitoring for your own team? Consider these 5 Best Practices:
- Work with upper management, supervisors, and trainers to develop a consistent grading system.
- Evaluate agents via both random, live-call monitoring and post-call review.
- Remember the 3 L’s: Listening, Looking, Learning.
- Recognize that agents will make mistakes, but an agent can only be as good as the resources at their disposal.
- Keep an open mind. Respect and seek out all points of view for a comprehensive assessment!
Quality assurance is bigger than any one client, one reviewer, or one agent; in fact, it requires input from all parties. All opinions are welcome in the interest of improving call handling for existing and future clients. The more we are able to evaluate what is and is not working, the more effectively we can modify operations for the benefit of the entire organization.
Quality Assurance Call Evaluation Form Template
This quality assurance call evaluation form can be used by your business to evaluate the quality of any business telephone call. The overall agent score is calculated by determining the agent’s score in each of the three evaluation components: Presentation, Conversational Skills, and Call Management.
Each of the three categories above is composed of four questions with different percentage weights. Complete the form with a Yes or No response for the questions in each category to determine the overall score of the call. The simple scoring helps you easily identify areas for call improvement and make sure each call is conforming to your quality standards.
Download this form and start evaluating calls to make sure any operator interactions are exceeding your expectations!
Presentation
- Were callers greeted properly?
- Did the agent speak clearly?
- Did the agent have a helpful or friendly tone?
- Was the agent’s demeanor professional?
Conversational Skills
- Did the agent actively listen to the caller?
- Did the agent control the call flow?
- Did the agent display empathy appropriately?
- Did the agent avoid dead air and long pauses?
Call Management
- Did the agent follow scripted text?
- Did the agent follow instructions or procedure?
- Did the agent enter data correctly?
- Did the agent process the call at an appropriate pace?