Effect Size Reporting in Applied Psychology: How Are We Doing?

Eric M. Dunleavy, American Institutes for Research
Christopher D. Barr, University of Houston
Dana M. Glenn, Transportation Security Administration
Kristina Renee Miller, University of Houston

"I believe that the almost universal reliance on merely refuting the null hypothesis as a standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology." (Meehl, 1978)

Over the last decade, quantitative practices in psychological research have changed, with the role of null hypothesis significance testing (NHST) being questioned and more emphasis being placed on effect sizes. In fact, the American Psychological Association (APA) and quantitative scholars recommend that NHST always be accompanied by other indices, including relevant derived effect sizes and confidence intervals (Cohen, 1990, 1994; Falk & Greenbaum, 1995; Kirk, 1996). This prioritization of effect size affects academics and practitioners of I-O psychology in at least two ways. First, revisions to the APA Publication Manual and quantitative reporting guidelines for submission to top-tier journals affect the way I-O psychologists analyze and present their results in research articles. Similarly, professional standards also prioritize effect size reporting. For example, the 4th edition of the Principles for the Validation and Use of Personnel Selection Procedures (Society for Industrial and Organizational Psychology, 2003) requires the reporting of effect sizes in all applied research where effect sizes are available. Second, effect size reporting ensures the availability of information for meta-analysis. This is important for both academics and practitioners.
For example, in the academic realm, meta-analytic work allows for a better understanding of relations between constructs and informs appropriate research design and sampling via a priori power analysis. Meta-analytic work is also valuable for practitioners because it provides a way to evaluate the (a) practical significance of an intervention, (b) viability of transporting validity, and (c) potential adverse impact associated with a selection procedure when conducting an actual study is not feasible.

The Industrial-Organizational Psychologist 29
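To make the reporting practice recommended above concrete, here is a minimal Python sketch, not part of the original article, that computes one common standardized effect size, Cohen's d for two independent groups, along with an approximate 95% confidence interval based on the large-sample standard error. The function names are illustrative, not drawn from any particular library.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for d via the large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Example: report d alongside its interval, as the guidelines recommend
treatment = [2, 4, 6]
control = [1, 3, 5]
d = cohens_d(treatment, control)
lo, hi = d_confidence_interval(d, len(treatment), len(control))
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the point estimate conveys both the magnitude of the effect and its precision, which is exactly the information a later meta-analysis needs.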