OPINION

Educational Crowdsourcing: Developing RadExam

Petra J. Lewis, MBBS, Eric Nyberg, MD, Jose Cayere, MBA, Ana Valle, BA, MHS, Lawrence P. Davis, MD

THE PROBLEM

Radiology program directors (PDs) are required to provide residents with timely, formative feedback and guidance informed by regular evaluations. PDs are also required to report on resident performance to the ACGME biannually, including milestones for "medical knowledge." Most programs rely on the ACR's annual Diagnostic Radiology In-Training examination, more commonly known as the "in-service examination," given each January at regional examination centers. Results are reported as an overall percentage score relative to the resident's R level, along with a subspecialty breakdown. Unfortunately, although considerable effort has been made to develop this into a high-quality examination, its utility remains hamstrung by several inherent limitations:

■ The examination is given once a year, which means that scores in a given subspecialty will be highly influenced by a particular resident's schedule (eg, how recently the resident was on that rotation). This considerable variability confounds interpretation of the score for that subspecialty, making useful feedback to the resident challenging.

■ Formative assessment should not be an isolated yearly event but should be integrated into medical education on a more regular basis, much like continuous quality improvement initiatives elsewhere in medicine [1].

■ Residents are tested on knowledge they are meant to acquire over 4 years of residency; this is particularly problematic in years 1 and 2.

■ The number of questions suitable for any one R level within any one subspecialty is limited (~7-15 per subspecialty, many of which may not be suitable for a particular R level). This means that a one- or two-question difference in raw score may result in large differences in relative percentile within a given R level (illustrated in the sketch at the end of this section).

Although some programs have developed their own examinations, this is a time-consuming process: it requires item-writing training for the questions to be psychometrically sound and a reliable software interface for full functionality, and the examination results are not validated against a national population.

Because of these limitations, the Association of Program Directors in Radiology (APDR) and the ACR proposed collaborating to develop a high-quality web-based question bank, RadExam™, that could be used to build rotation-specific examinations for formative assessment and feedback. The content and timing of these examinations could then be tailored to the needs and schedules of individual residents, providing more meaningful knowledge assessment for program directors and offering important feedback to residents.

To provide programs with a versatile set of examination options and iterative testing, a database of an estimated minimum of 3,000 questions is required. This is obviously a monumental task for a group of volunteers to complete within a reasonable time frame. Faced with this challenge, we considered the success of sites such as Wikipedia, Khan Academy, and Radiopaedia and decided to crowdsource development of the question database. We also intended to give the participating academic community an accessible means for clinician-educators to produce documented, peer-reviewed, nationally distributed scholarly work that could contribute to their education portfolios for promotion.
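To make the final limitation concrete, the following toy simulation (in Python; the cohort size, the 10-question pool, and the 60% per-question success probability are all invented for illustration, not in-service examination data) shows how coarse the percentile scale becomes when a subspecialty contributes only about 10 scorable questions:

```python
# Toy illustration (invented numbers, not in-service examination data):
# with only ~10 scorable questions in a subspecialty, each additional
# correct answer moves a resident across a wide percentile band.
import random

random.seed(0)

N_QUESTIONS = 10    # questions available for one subspecialty and R level
N_RESIDENTS = 1000  # hypothetical national cohort at the same R level
P_CORRECT = 0.6     # assumed per-question probability of a correct answer

# Simulate raw scores for the cohort.
scores = [sum(random.random() < P_CORRECT for _ in range(N_QUESTIONS))
          for _ in range(N_RESIDENTS)]

def percentile(raw, cohort):
    """Percentage of the cohort scoring strictly below this raw score."""
    return 100.0 * sum(s < raw for s in cohort) / len(cohort)

for raw in range(4, 9):
    print(f"raw score {raw}/{N_QUESTIONS}: percentile ~{percentile(raw, scores):.0f}")
```

In this simulated cohort, a single additional correct answer near the middle of the score distribution shifts the result by on the order of 20 percentile points, which is exactly the granularity problem described above.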
WHAT WE DID

Database Development

We used Knack™ (EvenlyOdd, Lititz, Pennsylvania; https://www.knack.com), a commercial software package that enables nonprogrammers to build relational databases. In our case, the Knack platform was configured to integrate input from hundreds of volunteers through a user-friendly interface and to make participation
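As a rough illustration of the kind of relational structure such a question bank requires, the sketch below links questions to their authors and peer reviews, so that examinations can be assembled by subspecialty and R level while preserving documented scholarly credit. It uses Python's standard sqlite3 module rather than Knack, and every table and column name is an invented assumption, not the actual RadExam schema:

```python
# A minimal relational sketch (Python standard-library sqlite3, not the
# Knack platform itself). All table and column names are illustrative
# assumptions, not the actual RadExam schema.
import sqlite3

conn = sqlite3.connect("question_bank.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS authors (
    author_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    institution TEXT
);

CREATE TABLE IF NOT EXISTS questions (
    question_id  INTEGER PRIMARY KEY,
    author_id    INTEGER NOT NULL REFERENCES authors(author_id),
    subspecialty TEXT NOT NULL,                -- eg, 'Neuroradiology'
    r_level      INTEGER NOT NULL,             -- intended training year, 1-4
    stem         TEXT NOT NULL,                -- question text
    status       TEXT NOT NULL DEFAULT 'draft' -- draft / in_review / approved
);

CREATE TABLE IF NOT EXISTS reviews (
    review_id   INTEGER PRIMARY KEY,
    question_id INTEGER NOT NULL REFERENCES questions(question_id),
    reviewer_id INTEGER NOT NULL REFERENCES authors(author_id),
    verdict     TEXT NOT NULL,                 -- accept / revise / reject
    comments    TEXT
);
""")
conn.commit()

# Assembling a rotation-specific examination then becomes a simple query, eg:
rows = conn.execute(
    "SELECT question_id, stem FROM questions "
    "WHERE subspecialty = ? AND r_level = ? AND status = 'approved'",
    ("Neuroradiology", 2),
).fetchall()
```

Tying each question to its author and its reviews is what would let a single database both drive rotation-specific examinations and document the peer-reviewed scholarly contributions mentioned above.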