Mini-clinical evaluation exercise as a student assessment tool in a surgery clerkship: Lessons learned from a 5-year experience

Luise I. M. Pernar, MD,a Sarah E. Peyre, EdD,a,c Laura E. G. Warren, MEd,c Xiangmei Gu, MS,b Stuart Lipsitz, ScD,b,c Erik K. Alexander, MD,b,c Stanley W. Ashley, MD,a,c and Elizabeth M. Breen, MD,a,c Boston, MA

Background. The mini-clinical evaluation exercise (mini-CEX), used for clinical skill assessment in internal medicine, provides in-depth assessment of single clinical encounters. The goals of this study were to determine the feasibility and value of implementing the mini-CEX in a surgery clerkship.

Methods. We performed a retrospective review of mini-CEX evaluations collected for surgery clerkship students at our institution between 2005 and 2010. Returned assessment forms were tallied. Qualitative feedback comments were analyzed using grounded theory, and principal components analysis identified thematic clusters. Thematic comment counts were compared to those provided via global assessments.

Results. Mini-CEX score sheets were available for 124 of 137 (90.5%) students. Thematic clusters identified comments on 8 distinct clinical skill domains. On the mini-CEX, each student received an average of 6.5 ± 2.2 qualitative feedback comments covering 4.5 ± 1.2 separate skills; of these comments, 42.7% were critical. Comments provided in global evaluations were fewer (2.9 ± 0.6; P < .001), constrained in scope (0.8 ± 0.2 skills; P < .001), and rarely critical (9.1%).

Conclusion. A mini-CEX can be incorporated into a surgery clerkship. The number and breadth of feedback comments make the mini-CEX a rich assessment tool. Critical and supportive feedback comments, both highly valuable, are provided with nearly equal frequency when the mini-CEX is used as an assessment tool. (Surgery 2011;150:272-7.)
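The Results section compares mean per-student feedback-comment counts between the two instruments (6.5 ± 2.2 vs. 2.9 ± 0.6; P < .001). As a rough illustration of that kind of two-group comparison only (the data below are hypothetical, not the study's raw counts, and the study does not specify its test statistic), a minimal sketch using a Welch t statistic might look like:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-student feedback-comment counts (illustrative only;
# the study reports mini-CEX 6.5 +/- 2.2 vs. global 2.9 +/- 0.6).
mini_cex_counts = [7, 5, 9, 6, 8, 4, 7, 6]
global_counts = [3, 2, 3, 3, 2, 4, 3, 3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

print(f"mini-CEX mean: {mean(mini_cex_counts)}")  # 6.5
print(f"global mean:   {mean(global_counts)}")    # 2.875
print(f"Welch t:       {welch_t(mini_cex_counts, global_counts):.2f}")
```

In practice one would obtain the P value from the t distribution with Welch-Satterthwaite degrees of freedom (e.g., `scipy.stats.ttest_ind(..., equal_var=False)`); the sketch above only shows the descriptive comparison the abstract reports.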
From the Departments of Surgerya and Medicine,b Brigham and Women's Hospital; and the Harvard Medical School,c Boston, MA

Traditionally, evaluation of student performance in core surgery clerkships relies on written and oral examinations, performance on simulated clinical examinations such as the Objective Structured Clinical Examination, and, predominantly, on global ratings provided by faculty at the end of a rotation or clerkship.1 Despite the ubiquity and strengths of these methods of evaluation, it has been recognized that they may not be ideal for measuring student clinical performance and clinical competency. Specifically, although the National Board of Medical Examiners subject examination is an effective tool for assessing medical knowledge, performance on this examination has not shown good correlation with clinical skills.1 Oral examinations assess medical knowledge in addition to data gathering, but are plagued by the concern that they are inherently subjective and prone to bias.2,3 Objective Structured Clinical Examinations, based on scripted encounters with standardized patients, are time and resource intensive.4 Global assessments of trainees' clinical skills provided by faculty have the advantage of requiring little additional time beyond typical interactions between students and faculty. They also derive from performance in real situations, typically span several weeks, and allow commentary not only on snapshots of performance but on the student's progress. Despite these strengths, evaluation by global assessment is remote, relies predominantly on the aggregation of disparate events,1 is subjective, may fail to capture deficiencies in performance,5 and in fact may overestimate clinical skills.6 Generally, none of the

Supported by Departmental Funds.

Accepted for publication June 14, 2011.

Reprint requests: Elizabeth M. Breen, MD, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115.
E-mail: ebreen@partners.org.

0039-6060/$ - see front matter
© 2011 Mosby, Inc. All rights reserved.
doi:10.1016/j.surg.2011.06.012