https://doi.org/10.1177/0003122418806282
American Sociological Review
2018, Vol. 83(6) 1281–1283
© American Sociological
Association 2018
DOI: 10.1177/0003122418806282
journals.sagepub.com/home/asr
In the three years we have been editing ASR,
we have been impressed with the method-
ological breadth and depth of the submissions
to the journal. Among the subset of papers
that use primarily quantitative analytic strate-
gies, an equally impressive range of methods
and techniques is on display. The field has
come a long way since any of the three of us
were in graduate school and, indeed, many of
the articles we have published in our role as
editors represent the forefront of sophistication in techniques ranging from fixed- and random-effects models on one end to web scraping and text analysis on the other. In this
editorial, we would like to focus on a set of
issues that seem to come up repeatedly in the
thousands of papers we have read. These are
not errors per se, but fall in the category of
gaps or lags between previously accepted
practices among quantitative scholars in sociology and the state-of-the-art consensus
among quantitative methodologists. These
issues happen with such frequency that we
feel compelled to offer some recommenda-
tions for future ASR submissions.
P-VALUES AND ONE- VERSUS
TWO-TAILED TESTS
Debates about the utility of p-values abound
in the scientific literature. On the one hand,
those concerned about replicability and stan-
dards for new discovery argue that the thresh-
old for statistical significance should be
reduced below .05 (Benjamin et al. 2018). On
the other hand, some argue that we should do
away with p-values and null hypothesis significance testing altogether (McShane et al.
2017).
We will not take a stand in this debate
except to say that, in general, p < .10 and one-
tailed tests should only be used in rare, excep-
tional circumstances with proper justification.
Many papers attempt to justify use of p < .10
standards by pointing to “directionality” in
their verbally stated hypotheses. Others use
vague language of p < .10 indicating “border-
line” or “suggestive” findings. We do not find
the first rationale compelling. In terms of the
second practice, ASR is our discipline’s top
journal. We need to be publishing strong evi-
dence rather than “suggestive” findings.
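The arithmetic behind this concern is simple: for a symmetric test statistic, a one-tailed test halves the two-tailed p-value, so a result that clears a one-tailed .05 bar can fail the conventional two-tailed standard. A minimal sketch in Python (the z-statistic of 1.65 is an illustrative value, not from any submission):

```python
import math

def two_tailed_p(z):
    """Two-tailed p-value for a standard-normal test statistic,
    computed from the normal CDF via math.erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

z = 1.65  # a z-statistic that just clears a one-tailed .05 threshold
p_two = two_tailed_p(z)
p_one = p_two / 2  # one-tailed p is half the two-tailed p
print(f"one-tailed p = {p_one:.3f}, two-tailed p = {p_two:.3f}")
# The same statistic that is "significant" one-tailed (p ≈ .049)
# fails the conventional two-tailed .05 standard (p ≈ .099).
```

In other words, invoking "directionality" to justify a one-tailed test amounts to applying a .10 standard under another name.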
TESTING MEDIATION
We get many submissions to the journal attempting to test mediation with a stripped-down version of the Baron and Kenny (1986) steps.
Authors usually proceed like this: they run one model with their key predictor plus controls and then a second model adding the mediator. If the
coefficient of the key predictor is reduced or
rendered nonsignificant, the authors conclude
that the main effect has been mediated.
There are several problems with this
approach. Most commonly, authors fail to run a
significance test for the difference in magnitude
between coefficients. This step is necessary to
determine whether mediation has occurred. The
coefficient of the key predictor can be reduced, or even rendered nonsignificant, while the change itself remains within what could occur by chance alone. As Gelman and Stern
(2006) note, changes in statistical significance
may not themselves be significant.
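For concreteness, one standard alternative to eyeballing the change in a coefficient is a bootstrap confidence interval for the indirect effect. A minimal sketch in Python, on simulated data (the sample size, coefficients, and resampling count below are illustrative assumptions, not from any submission):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with true mediation: x -> m -> y
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediator depends on x
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome depends on m and x

def ols_slopes(y, X):
    """Coefficients (excluding intercept) from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Bootstrap the indirect effect a*b:
#   a = effect of x on m;  b = effect of m on y, controlling for x
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = ols_slopes(m[idx], x[idx])[0]
    b = ols_slopes(y[idx], np.column_stack([m[idx], x[idx]]))[0]
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
```

If the resulting interval excludes zero, there is direct evidence for an indirect effect, which is a stronger basis for a mediation claim than observing that the key predictor's coefficient shrank between two models.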
Editors' Comment: A Few Guidelines for Quantitative Submissions

Sarah A. Mustillo (a), Omar A. Lizardo (b), and Rory M. McVeigh (a)

(a) University of Notre Dame
(b) University of California-Los Angeles