ORIGINAL ARTICLE — LEADERSHIP

ACR RADPEER Committee White Paper with 2016 Updates: Revised Scoring System, New Classifications, Self-Review, and Subspecialized Reports

Shlomit Goldberg-Stein, MD (a), L. Alexandre Frigini, MD (b), Scott Long, MD (c), Zeyad Metwalli, MD (b,d), Xuan V. Nguyen, MD, PhD (e), Mark Parker, MD (f), Hani Abujudeh, MD, MBA (g)

Abstract
The ACR's RADPEER program is currently the leading method for peer review in the United States. To date, more than 18,000 radiologists and more than 1,100 groups participate in the program. The ABR accepted RADPEER as a practice quality improvement activity in 2009, which can be applied toward maintenance of certification; there are currently more than 2,200 practice quality improvement participants. There have been ongoing deliberations regarding the utility of RADPEER, its goals, and its scoring system since the preceding 2009 white paper. This white paper reviews the history and evolution of RADPEER and eRADPEER, the 2016 ACR Peer Review Committee's discussions, the updated recommended scoring system and lexicon for RADPEER, and updates to eRADPEER, including the study type, age, and discrepancy classifications. The central goal of RADPEER, to aid in nonpunitive peer learning, is discussed.

Key Words: RADPEER learning, peer review, peer learning, RADPEER, nonpunitive

J Am Coll Radiol 2017;14:1080-1086. Copyright © 2017 American College of Radiology

INTRODUCTION
In 1999, the Institute of Medicine (IOM) reported that medical errors accounted for nearly 100,000 preventable deaths each year in the United States alone [1]. Patient safety proved elusive, and in September 2015, a follow-up IOM report focusing on diagnostic errors cited that these errors contribute to an alarming 10% of patient deaths [2]. One of the many stated goals of that report was to "develop and deploy approaches to identify, learn from, and reduce diagnostic errors and near misses in clinical practice" [2].
In addition to the unnecessary, irreplaceable loss of life, preventable medical errors prolong the course and duration of patient hospitalization, increase patient morbidity and suffering, and add tens of billions of dollars per year to the cost of health care delivery. Unfortunately, as the original IOM report highlighted, to err is indeed human; yet it was also understood that human errors may occur in predictable patterns and frequencies [1,2]. It is within this context that we provide a history of the evolution of the ACR's RADPEER™ and eRADPEER programs and then describe the most recent updates of 2016, which are a deliberate attempt to facilitate peer learning, as valued by the IOM.

The original IOM report did not cite medical imaging in particular as an area of medicine fraught with high error rates [1]. Nevertheless, in response to that IOM report, and in the interest of public safety and the health care community, the ACR task force established several committees to examine this issue specifically. One such

(a) Montefiore Medical Center, Albert Einstein College of Medicine, Bronx, New York.
(b) Baylor College of Medicine, Houston, Texas.
(c) Southern Illinois University School of Medicine, Springfield, Illinois.
(d) Michael E. DeBakey VA Medical Center, Houston, Texas.
(e) Ohio State University Wexner Medical Center, Columbus, Ohio.
(f) Virginia Commonwealth University Medical Center, Richmond, Virginia.
(g) Cooper University Hospital of Rowan University, Camden, New Jersey.

Corresponding author and reprints: Dr Shlomit Goldberg-Stein, MD, 111 East 210 Street, Bronx, NY 10463; e-mail: sgoldberg@montefiore.org.

The authors have no conflicts of interest related to the material discussed in this article.

http://dx.doi.org/10.1016/j.jacr.2017.03.023