Self-Reporting
From QA to QI
In the May issue of CHPSO Patient Safety News, this column focused on the importance of learning from defects and pointed out the problem of identifying adverse events, near misses and hazardous conditions. This is not a small problem. Typically, only about 10 percent of adverse events are reported. This means that either much effort must be expended to identify such cases by other means or many learning opportunities will be missed.
Remember that peer review is the dominant mode of event analysis in hospitals, and generic screens are the dominant method by which cases are identified for peer review. These screens include hospital readmission, death, unplanned return to the OR, unplanned transfer to critical care, etc. Generic screens were initially developed to identify instances of patient harm in order to test whether a no-fault medical malpractice system might be viable. They have low specificity and have never been validated for use in peer review. The generic screens were, however, used in the Harvard Medical Practice Study to identify rates of harm and substandard care. The Institute for Healthcare Improvement Trigger Tool is an updated version of this method.
In the Harvard study, 26 percent of all admissions fell out on the screens. The study's staged review process ultimately led the investigators to conclude that 3.7 percent of admissions were associated with patient harm and 1 percent with negligence, i.e., substandard care. In other words, reviewers had to look at 26 records to find about four instances of harm and one instance of substandard care. This is why a large proportion of hospitals do secondary pre-review screenings before assigning cases for peer review. None of this gets us to the goal of the QI Model: to identify and act on learning opportunities to improve the quality and safety of care. To be quite frank, as a means of identifying cases for peer review, the generic screen process stinks.
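To make the yield explicit, here is a rough back-of-the-envelope restatement of those figures (my arithmetic, treating the harm and negligence cases as the ones surfaced among the flagged records, which is how the staged review worked):

\[
\frac{3.7\ \text{harm cases}}{26\ \text{flagged records}} \approx 14\%,
\qquad
\frac{1\ \text{negligence case}}{26\ \text{flagged records}} \approx 4\%
\]

Roughly one flagged record in seven yields an instance of harm, and only about one in 26 yields substandard care.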
About 20 years ago, the aviation industry woke up to the problem of underreporting and came to recognize that fear of punishment for reporting was poisoning efforts to improve safety. This recognition led to the birth of aviation safety programs that granted immunity from sanctions to pilots who made good-faith safety reports. Together with the introduction of crew resource management training, these programs were key to the dramatic progress that followed. At least one study suggests that a non-punitive environment would be critical to replicating this success in health care.
There is only one published example of a successful self-reporting program in health care. The example comes from a department of anesthesia at an academic medical center, which was able to sustain high rates of self-reporting (90 percent of cases reviewed, 70 percent of events identifiable by all means) over several years. The authors assert that, “Anesthesiologists will comply with a system of self-reporting if they understand the process, if there is institutional and departmental encouragement and support for the process, and if the process is non-punitive and can result in real improvements in patient care.”
My latest national study of peer review practices (under review for publication) found that self-reporting is beginning to be promoted more broadly. Moreover, hospitals in which the practice is taking hold are realizing the expected improvement in quality and safety.
Coming next: How to promote self-reporting