CASE STUDIES IN CLINICAL PRACTICE MANAGEMENT

The Effect of an Electronic Peer-Review Auditing System on Faculty-Dictated Radiology Report Error Rates

Jonathan H. Chung, MD, Heber MacMahon, MD, Steven M. Montner, MD, Lili Liu, MASc, David M. Paushter, MD, Paul J. Chang, MD, Gregory L. Katzman, MD, MBA

THE PROBLEM: ERRORS IN REPORTS RELATED TO VOICE RECOGNITION

An interpretive narrative report is the main product of a radiologist's work and is the most prevalent communication between the radiologist and the clinical team. In recent years, the importance of turnaround time for radiology reports has been increasingly studied and discussed [1]. Central to such research has been the increased use of voice recognition (VR) software, which is now the predominant method used for generating radiology reports [2]. The use of VR has consistently been shown to decrease turnaround time [3,4].

Given the focus on minimizing turnaround time and the ubiquity of VR for report generation, report quality has received less attention, even though many studies have demonstrated a higher rate of errors when VR is used [5-7]. Error-ridden radiology reports not only confuse clinicians and create a poor impression among patients who read their reports but may also have medicolegal ramifications [8]. JACR has recently emphasized clarity in reports by creating a new column, "Speaking of Language," which aims to "improve radiology reporting one meaningless or inappropriate word at a time" [9].

At our medical center, anecdotal evidence suggested that the error rate in radiology reports increased when VR was implemented. To gauge the quality of radiology reports with respect to grammar, clarity, and comprehensibility, we initially implemented a manual system in which reports were proofread by a faculty member who provided corrective feedback to the individual who generated the report. However, it quickly became obvious that this approach was too labor intensive to be sustainable. Thus, a web-based IT tool was created to facilitate measurement of the error rate for radiology reports within each section over time, with errors identified systematically by each attending radiologist in the department. In addition, a new version of VR was implemented during the measurement period.

The purposes of this study were threefold:

- to gauge the error rate for radiology reports in a tertiary academic medical center;
- to determine whether mandatory, department-wide participation in peer-review assessment of radiology reports would in and of itself affect the quality of radiology reports over time; and
- to determine whether introduction of a new VR system to users with prior VR experience would alter the quality of radiology reports.

We hypothesized that a minority of radiology reports would contain errors; that radiology report error rates would improve over time because of peer review; and that introduction of a new VR system would lead to a small incremental improvement in radiology report error rates.

WHAT WAS DONE: PEER ASSESSMENT OF ERRORS IN REPORTS

Each month, 10 reports that had been dictated by each radiologist (without a trainee) were randomly selected by the IT tool, anonymized, and submitted to a queue for scoring by other, anonymous radiologists in the same subfield of expertise.
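The monthly sampling workflow lends itself to a brief illustration. The following is a minimal sketch in Python of how such a selection step could be structured; the names used here (Report, select_monthly_sample, the per-subfield queue layout, and so on) are hypothetical assumptions for illustration, as the article does not describe the tool's actual implementation.

```python
import random
from dataclasses import dataclass

REPORTS_PER_RADIOLOGIST = 10  # monthly sample size described in the text


@dataclass
class Report:
    report_id: str
    radiologist_id: str
    subfield: str       # e.g., "chest"; used to match reviewers by expertise
    text: str
    had_trainee: bool   # reports dictated with a trainee are excluded


def anonymize(report: Report) -> dict:
    """Strip identifiers so reviewers cannot tell who dictated the report."""
    return {"subfield": report.subfield, "text": report.text}


def select_monthly_sample(reports: list[Report]) -> dict[str, list[dict]]:
    """Randomly pick up to 10 trainee-free reports per radiologist and
    group the anonymized copies into per-subfield review queues."""
    by_radiologist: dict[str, list[Report]] = {}
    for r in reports:
        if not r.had_trainee:
            by_radiologist.setdefault(r.radiologist_id, []).append(r)

    queues: dict[str, list[dict]] = {}
    for candidates in by_radiologist.values():
        k = min(REPORTS_PER_RADIOLOGIST, len(candidates))
        for r in random.sample(candidates, k):
            queues.setdefault(r.subfield, []).append(anonymize(r))
    return queues
```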
Reports were scored for the presence of errors on a three-point scale (good, fair, or poor), based on a subjective assessment of the number and nature of errors within the report, as well as whether the errors were thought to potentially alter the meaning of the report in a way that could be clinically significant.
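The three-point scale maps naturally onto a small data model. The sketch below is illustrative only: the enum values mirror the good/fair/poor scale described above, but the field names (ReportScore, clinically_significant) and the error-rate definition are assumptions, not the tool's actual schema.

```python
from dataclasses import dataclass
from enum import Enum


class Grade(Enum):
    GOOD = 3
    FAIR = 2
    POOR = 1


@dataclass
class ReportScore:
    report_id: str
    reviewer_id: str              # anonymous to the report's author
    grade: Grade                  # subjective; reflects number/nature of errors
    clinically_significant: bool  # could the errors change the report's meaning?


def error_rate(scores: list[ReportScore]) -> float:
    """Fraction of reviewed reports judged less than 'good' -- one plausible
    way to track a section's error rate from month to month."""
    flawed = sum(1 for s in scores if s.grade is not Grade.GOOD)
    return flawed / len(scores) if scores else 0.0
```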