Newsletter

Measuring Clinical Performance
From QA to QI

In my last column, I outlined the three primary modes of organizational learning and highlighted the opportunity to measure clinical performance during peer review. Clinical performance measures can be used to promote self-correcting behavior through timely performance feedback. They can also serve to identify performance trends among groups of clinicians. Extracting such measures greatly increases the efficiency and value of peer review. Let’s see how.

When peer reviewing a chart, physicians invariably look at the admission history and orders, the operative report (if applicable), major test reports, consultant reports and the discharge summary. In the more complicated cases, they’ll also look at progress notes, nurses’ notes, etc. In the prevailing QA mode of peer review, the learning from all this effort is reduced to a standard of care judgment: a single data point. This is a great waste.

Alternatively, a simple form can be developed to capture ratings on multiple elements of clinical performance. Figure 1 shows the structure of one such form. With this QI approach, more than 10 times the information can be captured with minimal additional effort. Moreover, even though these are subjective measures, they are much more reliable than the standard of care judgment (which is itself subjective).

The secret ingredient is the rating scale. Reliability is all about differentiation, and it is not the same as agreement. If, as is commonly done, we rate almost all physicians as having met the standard of care, agreement is nearly perfect, but the reliability of the evaluation is close to zero because we haven’t differentiated the shades of gray between outstanding and miserable. Up to a point, the more intervals on the scale, the greater its reliability. Most standard of care judgments are made on scales with three levels. For clinical performance measures, some authorities suggest an asymmetrical scale of seven to nine categories, with more categories describing the range of above-average performance.
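The agreement-versus-reliability distinction can be made concrete with a small numeric sketch (the ratings below are invented for illustration, not drawn from actual peer review data). Two hypothetical reviewers who mark nearly every chart “met the standard” agree perfectly, yet their ratings contain no variation with which to differentiate performers; the same reviewers on a nine-point scale agree less often but differentiate far more:

```python
# Illustration with invented data: high agreement on an undifferentiated
# scale can coexist with zero reliability (no ability to differentiate).

def percent_agreement(a, b):
    """Fraction of charts on which the two reviewers gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pearson_r(a, b):
    """Pearson correlation, used here as a crude proxy for reliability."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    if sa == 0 or sb == 0:
        return 0.0  # no variation in the ratings -> nothing differentiated
    return cov / (sa * sb)

# Typical QA pattern: almost everyone rated "met the standard" (2 on a 1-3 scale).
qa_1 = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
qa_2 = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2]

# QI pattern: the same ten charts rated on a differentiating 9-point scale.
qi_1 = [7, 5, 8, 3, 6, 9, 4, 7, 5, 8]
qi_2 = [6, 5, 8, 4, 6, 8, 4, 7, 6, 8]

print(percent_agreement(qa_1, qa_2))  # 1.0 -- perfect agreement...
print(pearson_r(qa_1, qa_2))          # 0.0 -- ...but zero differentiation
print(percent_agreement(qi_1, qi_2))  # lower agreement...
print(pearson_r(qi_1, qi_2))          # ...but strong differentiation
```

In practice, inter-rater reliability for ordinal rating scales would be assessed with an intraclass correlation or a weighted kappa rather than a simple correlation; the Pearson coefficient is used here only to keep the sketch self-contained.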

Regardless, the standard of care judgment is the wrong question. As we saw in QA vs. QI: The Battle Royale (CHPSO Patient Safety News, September 2011), the QI model succeeds because it focuses on finding any and all learning opportunities. Chart documentation reflects the clinical data gathering and decision-making processes and is vitally important for communication among all caregivers; it is no longer just “notes to myself.” Case review is thus well positioned to examine the many factors important to good patient care where borderline performance may contribute to problems downstream. The QI approach to measuring clinical performance enables balanced feedback and avoids making threatening judgments about competence from a single case. Such subjective measures nicely complement objective measures like CMS Core Measures, NSQIP, resource use, etc. that are commonly included in OPPE profiles.

Quantification of clinical performance during peer review also helps to mitigate potential biases. More on that next time.

Coming next: Minimizing Peer Review Bias