Quality of statistical reporting in developmental disability journals
Aravind K. Namasivayam^a,b, Tina Yan^a, Wing Yiu Stephanie Wong^a and Pascal van Lieshout^a,b,c,d,e
Null hypothesis significance testing (NHST) dominates
quantitative data analysis, but its use is controversial and
has been heavily criticized. The American Psychological
Association has advocated the reporting of effect sizes
(ES), confidence intervals (CIs), and statistical power
analysis to complement NHST results to provide a more
comprehensive understanding of research findings. The aim
of this paper is to carry out a sample survey of statistical
reporting practices in two journals with the highest h5-index
scores in the areas of developmental disability and
rehabilitation. Using a checklist that includes critical recommendations by the American Psychological Association, we examined 100 randomly selected articles out of 456 articles reporting inferential statistics in 2013 in the Journal of Autism and Developmental Disorders (JADD) and Research in Developmental Disabilities (RDD). The results showed that, for both journals, ES were reported only about half the time (JADD 59.3%; RDD 55.87%). These findings are similar to those for psychology journals, but stand in stark contrast to ES reporting in educational journals (73%). Furthermore,
a priori power and sample size determination (JADD 10%;
RDD 6%), along with reporting and interpreting precision
measures (CI: JADD 13.33%; RDD 16.67%), were the least
reported metrics in these journals, but not dissimilar to
journals in other disciplines. To advance the science in
developmental disability and rehabilitation and to bridge the
research-to-practice divide, reforms in statistical reporting,
such as providing supplemental measures to NHST, are
clearly needed. International Journal of Rehabilitation
Research 38:364–369 Copyright © 2015 Wolters Kluwer
Health, Inc. All rights reserved.
Keywords: developmental disability, effect size,
null hypothesis significance testing, statistical reporting
^a Oral Dynamics Laboratory, Department of Speech-Language Pathology, ^b Toronto Rehabilitation Institute (TRI), ^c Institute for Biomaterials and Biomedical Engineering (IBBME), ^d Rehabilitation Sciences Institute, University of Toronto, Toronto and ^e Human Communications Laboratory (HCL), Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada
Correspondence to Aravind K. Namasivayam, PhD, Oral Dynamics Laboratory,
Department of Speech-Language Pathology, University of Toronto, 160-500
University Avenue, Toronto, ON, Canada M5G 1V7
Tel: +1 416 946 8552; fax: +1 416 978 1596;
e-mail: a.namasivayam@utoronto.ca
Received 7 April 2015; accepted 19 September 2015
Introduction
Researchers and clinicians in the area of developmental
disability and rehabilitation seek a variety of sources for
acquiring and disseminating information. These may be
newsletters, blogs, websites, and most importantly, peer-
reviewed journals (Chan et al., 2014). Null hypothesis
significance testing (NHST) dominates quantitative data
analysis in behavioral, social, and life-science journals
(Fidler et al., 2005; Kalinowski and Fidler, 2010). For
example, NHST was reported in more than 95% of the
articles published in leading psychology journals (from
1998 to 2006; Cumming et al., 2007). However, NHST is
often misunderstood and remains the subject of numerous controversies [Kalinowski and Fidler, 2010; for more details, see Vicente and Torenvliet (2000)]. In essence, statistical significance testing and inferential statistics assess whether a ‘positive’ finding could have arisen by chance if the null hypothesis were true, but they do not address the magnitude of change or the importance of the results in terms of practical or clinical significance (Vacha-Haase and Thompson, 2004).
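To make this distinction concrete, the following is a minimal illustrative sketch (not drawn from the article; the data are simulated and the group sizes, means, and variable names are assumptions) showing what NHST alone yields (a p-value) alongside the supplementary metrics discussed here, a standardized effect size (Cohen's d) and a 95% confidence interval for the group difference:

import numpy as np
from scipy import stats

# Simulated outcome scores for two hypothetical groups (illustrative only)
rng = np.random.default_rng(seed=1)
treatment = rng.normal(loc=52.0, scale=10.0, size=200)
control = rng.normal(loc=50.0, scale=10.0, size=200)

# NHST: an independent-samples t-test returns only a test statistic and a p-value
t_stat, p_value = stats.ttest_ind(treatment, control)

# Effect size: Cohen's d = mean difference divided by the pooled standard deviation
n1, n2 = treatment.size, control.size
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
diff = treatment.mean() - control.mean()
cohens_d = diff / pooled_sd

# Precision: 95% confidence interval for the mean difference
se_diff = pooled_sd * np.sqrt(1.0 / n1 + 1.0 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff

print(f"p = {p_value:.3f}; Cohen's d = {cohens_d:.2f}; "
      f"95% CI for the difference = [{ci_low:.2f}, {ci_high:.2f}]")

In such an output, a small p-value says only that a difference this large would be unlikely if the null hypothesis were true; Cohen's d and the confidence interval convey how large the difference is and how precisely it has been estimated, which is the additional information the reporting recommendations target.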
Several disciplines, such as psychology, medicine, and
education, have launched major campaigns to reform
statistical practices related to NHST: either to avoid NHST altogether or to report NHST alongside supplementary information such as effect sizes (ES) (Thompson, 1996; Vicente and Torenvliet, 2000; Henson, 2006; Fidler and Cumming, 2007).
Notably, the American Psychological Association (APA)
has advocated the reporting of ES, confidence intervals
(CIs), and extensive data descriptions, in addition to reporting NHST results, to provide a more comprehensive understanding of research findings (American Psychological Association, 2010, p. 33). The first call (in the field of psychology) for such changes in reporting requirements by the APA came in 2001, following the report by Wilkinson and the APA Task Force on Statistical Inference in 1999. Although it has been almost 15 years since the
call for changes in reporting requirements was made,
several journal reviews, including those in other fields,
indicate that reporting practices remain inconsistent (Sun
et al., 2010; Fritz et al., 2012). For example, in the top four
peer-reviewed periodicals published by the American
Speech-Language-Hearing Association, ES statistics
were reported in 27.7% (range of 13–72%) of articles