16 JULY 2010 VOL 329 SCIENCE www.sciencemag.org 282
EDUCATION FORUM

Understanding the scholarly development of Ph.D. students in science, technology, engineering, and mathematics (STEM) is vital to the preparation of the scientific workforce. During doctoral study, students learn to be professional scientists and acquire the competencies to succeed in those roles. However, this complex process is not well studied. Research to date suffers from overreliance on a narrow range of methods that cannot provide data appropriate for addressing questions of causality or effectiveness of specific practices in doctoral education. We advocate a shift in focus from student and instructor self-report toward the use of actual performance data as a remedy that can ultimately contribute to improved student outcomes.

Developing causal models that account for individual differences in skill outcomes is especially challenging, as students enter Ph.D. programs with highly variable sets of prior experiences, expectations, and available means of support. Disciplinary research skills are perhaps the most critical outcomes, as they underlie the quality of scholarly work. Other factors, like creativity and motivation, do play a role, but they require a strong foundation in appropriate theoretical and methodological knowledge to yield useful products (1).

Although scholarly attention to doctoral education has increased substantially in the last decade, it remains a minor presence (2). This is especially problematic given concerns about the capacity of the STEM workforce and the current emphasis on accountability within higher education. Opinions differ regarding the merits of the accountability movement, but the increased demand for evidence of effectiveness warrants reexamination of assumptions regarding current Ph.D. training practices.
Methods of Doctoral Education Research

Accurately gauging the effectiveness of STEM doctoral research training is essential for informed decisions about its improvement. It requires unambiguous data that reflect the development of key disciplinary competencies for students at all stages of their degree programs. However, there is a startling paucity of data on the skills that students have at program entry, the trajectories of skill development during their programs, and the extent to which skills acquired during doctoral study are applicable in interdisciplinary contexts (3).

Current knowledge of effectiveness in the doctoral education process relies almost entirely on anecdotal and self-report data: individual reflection [e.g., (4)]; interviews [e.g., (5, 6)]; and surveys [e.g., (7)]. Although self-reports can provide valid data regarding participants' attitudes, values, beliefs, and past behaviors, they are not necessarily accurate or objective measures of performance or underlying mental processes [e.g., (8, 9)]. Similarly, professors' estimates of student ability and performance are frequently biased (10).

Studies examining Ph.D. students' research skills as an outcome of their doctoral training typically rely on opinion-based assessments of readiness to conduct research independently (5-7, 11); coarse-grained student funding or publication rate data (12); or broad reputational assessments of degree programs or departments (13), which provide no insight into individual development. Occasionally, dissertations serve as proxies for student performance, but mentor and peer involvement with students' work and the use of variable standards in evaluation limit inferences about the relative strengths and weaknesses of an individual student's skills other than through personal opinion (14, 15).
Individual course grades and grade point averages are not effective metrics because of problems with validity and generalizability across institutions, programs, and instructors (16). Other approaches examine employment following degree attainment (e.g., U.S. National Science Foundation Survey of Doctorate Recipients). However, academic hiring decisions are directly influenced by factors reflecting neither an individual's skill level nor the quality of their training (e.g., the perceived fit of the research agenda, personality, and teaching ability to the needs of the hiring department and the qualifications of competing candidates). Further, STEM fields change as the central research questions evolve, the demand for experts shifts among specialties, and the availability of financial resources changes, limiting the conclusions that can be drawn from employment outcomes.

Education Performance and Processes

In contrast with other educational levels, such as kindergarten to high school (K-12) or undergraduate studies, performance-based behavioral and cognitive investigations of research skills among STEM Ph.D. students are almost nonexistent. Some empirical studies of scientific problem-solving skills include graduate student populations [e.g., (17)], but sample sizes are small, STEM-specific disciplinary skills are not the focus, and the impacts of doctoral training on skill development are largely ignored. Such investigations occur at one time point and cannot capture longitudinal skill development or knowledge integration. In contrast to medical education (18), for example, no research examines trends and individual differences in doctoral students' developmental trajectories toward expertise within their respective disciplines. Thus, data cannot be leveraged to refine training processes and to evaluate students against robust baseline models.
This difference may be due in part to the common use of competency standards in both K-12 and professional education, which represent stakeholder consensus on criterion-based assessments. Such agreement is not commonly explicit at the doctoral level, but several such projects have been initiated [e.g., (14)]. It is commonly assumed that doctoral

Performance-Based Data in the Study of STEM Ph.D. Education
GRADUATE EDUCATION
David F. Feldon,1* Michelle A. Maher,2 Briana E. Timmerman2
1University of Virginia, Charlottesville, VA 22904, USA. 2University of South Carolina, Columbia, SC 29208, USA.
*Author for correspondence. E-mail: dff2j@virginia.edu

Performance-based assessments of student skill development can help inform decisions about improving graduate education.