Bell, S. H., & Peck, L. R. (2016). On the “how” of social experiments: Experimental designs
for getting inside the black box. In L. R. Peck (Ed.), Social experiments in practice: The what,
why, when, where, and how of experimental design & analysis. New Directions for Evaluation,
152, 97–107.
On the “How” of Social Experiments:
Experimental Designs for Getting Inside
the Black Box
Stephen H. Bell, Laura R. Peck
Abstract
Program evaluators generally prefer to use the strongest design available to an-
swer relevant impact questions, reserving analytic strategies for use only as
necessary. Although the “simple” treatment versus control experimental design
is well understood and widespread in its use, it is our contention that creativity
in evaluation design can help answer more nuanced questions regarding what
about a program is responsible for its impacts. In response, this chapter discusses
several experimental evaluation designs that randomize individuals—including
multiarmed, multistage, factorial, and blended designs—to permit estimating
impacts for specific policy design features or program elements. We hope that
recasting some long-standing but underused designs in a new light will
motivate their increased use, where appropriate. © 2016 Wiley Periodicals,
Inc., and the American Evaluation Association.
Design trumps analysis: Program evaluators generally prefer designing
their studies to generate reliable answers to their research
questions over engaging in complex post-hoc analyses to get there
(Rubin, 2008). For example, Chapter 4 (Orr and Olsen) describes how studies
might be better designed to support generalization of findings. The current
chapter considers design options that are preferable to even the best
NEW DIRECTIONS FOR EVALUATION, no. 152, Winter 2016 © 2016 Wiley Periodicals, Inc., and the American Evaluation Association. Published online in Wiley Online Library (wileyonlinelibrary.com) • DOI: 10.1002/ev.20210