Newsletter

Learning to Improve Safety
From QA to QI

In my last column, I highlighted the fundamental differences between the old, dysfunctional QA style of peer review and the more effective QI Model. Let’s now put that in the context of how healthcare leaders create the learning necessary to improve patient safety.

The figure shows the three primary modes by which you and your organization learn: from defects in the delivery of care (that is, adverse events, no-harm events, near misses, and hazardous conditions); from taking advantage of what others have learned and what worked or didn’t work for them; and from measuring clinical performance (what the “numbers” say). These three modes parallel the themes of no blame for human error, collaboration, and accountability that dominate the safety literature. Ultimately, these three sources of learning must be translated into better care processes to actually improve safety.

Many of you have participated in a collaborative learning project with folks from other organizations. This has become a common way to adopt new approaches. We also tend to get ideas by going to conferences and by reading the “literature” to learn what others have done or discovered. While we may be dependent on the research community to further expand evidence-based practice, this mode of learning needs little additional improvement.

On the other hand, we have serious difficulties with event identification. For example, most hospital event reporting systems capture only about 10 percent of the adverse events identifiable by detailed record review. Since we have even bigger problems with the conventional methods of event analysis, peer review and root cause analysis (RCA), it’s easy to appreciate the potential value of improving this path to learning. On the other side of the diagram, we’re not doing much better at translating measurement into accountability. Most healthcare managers and leaders lack the training and skill to have the conversations that inspire others to learn and adopt new behaviors. So although there is certainly good value in collaborative projects and in ongoing learning from others, most organizations need to pay more attention to adopting better methods to learn from defects and from measurement. These activities are necessary to uncover and address the real problems at home.

There is also good opportunity in better connecting these various pieces. For example, it is relatively easy to measure clinical performance during peer review. More than ten times the information can be captured with minimal additional effort. Such data can be used to promote self-correcting behavior through timely performance feedback. Subjective methods of clinical performance measurement can be just as valid and useful as objective measures like CMS Core Measures, NSQIP data, etc. Nevertheless, they are not widely understood. More on that next time.

Coming Next: Measuring Clinical Performance
