Best Practices in 911 Quality Assurance

Written by KOVA Corp

In 2015, after more than three decades in existence, the National Emergency Number Association (NENA, also known as the 9-1-1 Association) introduced a new standard for Quality Assurance and Quality Improvement.

The quest for a unified standard was a long one, and the standard today constitutes the backbone of best practices for 911 operations centers and the staff who comprise them.

Even so, the standard leaves many of its recommendations open to interpretation (answering “what” but not necessarily “how”), and a strategic “unpacking” of some of NENA’s central tenets is warranted for the further development and delivery of quality-controlled 911 service.

The 911 operators who receive incoming calls are at the “front lines” of dispatch and are a critical component of service quality. The degree to which operators skillfully and comprehensively navigate incoming calls - and the degree to which they are capable of improving their own performance to meet quality criteria - is the degree to which optimal service can be rendered.

As with the improvement of any skill, review of prior performance is necessary in order to learn from mistakes of commission or omission and adjust performance accordingly. It is important for quality assurance (QA) personnel (those who oversee and manage performance standards and ongoing training for 911 operators) to review the entire call with the operator, not just the intake portion.

The reason is that there are many opportunities both before and after the intake portion of the call to handle the call in a way that adversely affects it. Unless operators and QA personnel review the entirety of the call together, an adequate understanding of why a call was not optimally navigated is difficult if not impossible to reach, leaving both QA personnel and operators at a loss for how to improve performance.

It is also important that QA personnel engage operators in a way that conveys that monitoring and random review of calls (a minimum of 2% of all calls taken) is not a means of “spying on” or “micromanaging” them. Rather, it is a means by which operators can reap the benefit of being informed by their own performance, providing a basis for improvement that results not only in better service for callers but also in greater ease and efficiency for the operator. QA monitoring and review must be framed as a win-win proposition for both callers and operators.
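To make the 2% benchmark concrete, here is a minimal sketch of randomly selecting calls for QA review. The function name, parameters, and call-ID representation are hypothetical, purely for illustration; real centers would draw the sample from their call-recording system.

```python
import math
import random

def sample_calls_for_review(call_ids, rate=0.02, seed=None):
    """Randomly select a QA review sample covering at least `rate`
    (default 2%, per the NENA minimum) of all calls taken."""
    rng = random.Random(seed)
    # Round up so small call volumes still yield at least one review.
    n = max(1, math.ceil(len(call_ids) * rate))
    return rng.sample(call_ids, n)

# Example: from 500 calls, at least 10 (2%) are pulled for review.
reviewed = sample_calls_for_review(list(range(1, 501)), seed=42)
print(len(reviewed))  # 10
```

Seeding the random generator is optional; it simply makes a given sample reproducible for audit purposes.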

Ideally, operators should be given “freedom within a framework.” In other words, though certain objective criteria must be met that allow little if any room for interpretation or deviation (such as identification of the nature and location of an emergency), there are other more subjective variables that may not apply equally to every call, and operators should be free to use their own best judgment in negotiating these variables.

Assessment criteria and scoring of QA monitoring and review should be transparent to operators and include both objective and subjective factors. Without transparency, operators are put in the unfair position of conforming to QA criteria that are invisible to them (they are “shooting blind”) and will be unable to make connections between their scoring and their performance, thus precluding targeted improvement.

Without assessment of and transparency to both objective and subjective factors (for example, whether the address is correctly received and how calm the operator remains in the face of a caller’s agitation), critical interrelationships between the two will remain ambiguous and unavailable for continued attention and improvement.

QA personnel also do well to be aware of various “learning styles” through which operators may be most “available” to coaching and improvement. For example, some operators may be visual learners who will make the most of their reviews if they are given a graph, pie-chart, or other “visual aid” of their performance.

Other operators may be auditory learners, who will benefit most directly from listening to their calls - being vocally prompted beforehand to listen for the fulfillment, or lack thereof, of specific criteria.

“Relational” learners - those who learn primarily through the emotional rapport between themselves and QA personnel - will benefit enormously from a warm, supportive, “I believe in you” attitude on the part of the QA agent.

QA personnel and QA programs should “build in” these and other learning styles to facilitate training and performance improvement for a diversity of learners.

Reviews of monitored calls should be timely and provide opportunity for feedback - not only from QA personnel but from operators. As front-line emergency personnel, operators have an “inside” or “boots on the ground” perspective by which QA personnel stand to be continually informed.

Reviews should not be “monologues” or one-way communications between QA personnel and operators, but ongoing dialogues (two-way communications) between them in which both stand to be generatively informed by the other in the service of ongoing recalibration and refinement of QA standards themselves.

Finally, QA systems that incorporate the latest data analytics to detect overarching trends across calls are of paramount importance. Even with optimal allocation of resources, human QA personnel can monitor and productively review only a tiny fraction of overall calls. (NENA issued a benchmark of 2% of total calls monitored not because this is an ideal number for quality assurance assessment and improvement coaching - far from it - but because 2% is the uppermost limit with which most QA personnel can reasonably contend.)

Advanced data analytics systems can track and monitor 100 percent of calls, identifying critical QA-related trends that are detectable only across large sample sizes. Additionally, while a human QA agent can identify, interpret, and effectively manage perhaps two dozen criteria, data analytics can manage hundreds (while generating new criteria through “smart” algorithms), compressing them into manageable categories ranked by priority toward delivering optimal service.

At KOVA, you’ll find the data analytics that are central to NENA’s best practices and that lead to the highest levels of service while streamlining operations and freeing up valuable resources. Contact us today and see why we’re the industry leader in digital service solutions.

Is Your Organization Ready to Optimize Its Public Safety Systems?