Greenhalgh recently described ‘Ten ways of cheating with
statistics’ [1]. Here are a few tips on how to ‘cheat’ with the
design of a clinical trial. ‘Cheat’ is actually the wrong word
– what is really meant is the clever design of a study to
increase its chances of yielding ‘positive’ results (i.e. results
that apparently demonstrate the effectiveness of the
treatment under scrutiny).
The simplest approach is to conduct a study without a
control group. Most conditions improve over time and
regression towards the mean will also help to normalize
parameters that were abnormal at an initial reading. Thus,
with repeated measurements of clinical endpoints, one will
almost invariably find an apparent overall improvement.
This apparent improvement can be entirely unrelated to
the therapeutic intervention applied. The trick is to ignore
this well-known fact and conclude that the treatment was
effective.
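The combined effect of selecting patients with abnormal readings and regression towards the mean can be sketched with a small simulation (hypothetical numbers, Python standard library only; no treatment effect is modelled at all):

```python
import random
import statistics

random.seed(42)

# Each patient's reading = stable true value + measurement noise.
# Nothing is done to the patients between the two measurements.
true_values = [random.gauss(140, 10) for _ in range(10000)]

def measure(true):
    return true + random.gauss(0, 10)  # measurement-to-measurement noise

baseline = [measure(t) for t in true_values]

# Enrol only patients whose entry reading is 'abnormal' (>= 160).
enrolled = [(b, t) for b, t in zip(baseline, true_values) if b >= 160]

follow_up = [measure(t) for _, t in enrolled]
mean_baseline = statistics.mean(b for b, _ in enrolled)
mean_follow_up = statistics.mean(follow_up)

print(f"mean at entry:     {mean_baseline:.1f}")
print(f"mean at follow-up: {mean_follow_up:.1f}")  # lower, despite no treatment
```

Because patients were enrolled partly on the strength of unusually high readings, their second reading falls back towards the true mean, producing the 'improvement' the uncontrolled study then credits to the therapy.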
Controlled clinical trials lacking design features that
minimize bias are more prone to generate a positive
result than studies that incorporate design features such
as placebo controls, blinding and randomization [2].
Failure to blind subjects, therapists, those assessing
outcome measures and those analysing the data can all lead
to biased results and interpretation. Randomization is
used to prevent the experimenter from allocating subjects
to treatment groups in a biased way, and to achieve
groups that are balanced for important prognostic factors.
The success of randomization in terms of balance is,
however, a judgement call. One way to obscure differences
between groups that favour the experimental group is to
apply a statistical test of significance to the baseline data.
Such tests are conservative because they are designed to err
towards finding no difference: they can yield no statistically
significant difference even when there are clinically relevant
differences in important prognostic factors. The
inappropriate use of significance tests to establish
baseline comparability between groups is widespread and
often remains unrecognised.
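A deterministic toy example (hypothetical ages, Python standard library only) shows how a clinically relevant baseline imbalance can sail through a significance test in a small trial; Welch's t statistic is computed by hand here, with the imbalance built in by construction:

```python
import math
import statistics

# Hypothetical small trial, n = 11 per arm. Age is taken as a strong
# prognostic factor; the treated arm is deliberately 8 years older
# on average, which would plausibly matter clinically.
offsets = [-24, -18, -12, -6, -2, 0, 2, 6, 12, 18, 24]
control = [60 + o for o in offsets]
treated = [68 + o for o in offsets]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal-variance form)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(control, treated)
diff = statistics.mean(treated) - statistics.mean(control)
print(f"mean baseline age difference: {diff:.0f} years")
print(f"Welch's t = {t:.2f}")  # about 1.3, well below the ~2 needed for p < 0.05
```

The 8-year age gap is real and potentially prognostic, yet the test reports 'no significant difference', which is then misread as evidence of baseline comparability.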
There are other subtle methods that can be used to
demonstrate that ineffective therapies work, including
equivalence or non-inferiority trials. For example, one
could conduct an under-powered equivalence trial
comparing an experimental therapy with a 'gold standard'
therapy; because of the small sample size the trial would
fail to show a difference. Consequently, one could
conclude (falsely) that the experimental therapy was as
effective as the gold standard. Alternatively, one could carry out
an adequately powered equivalence trial and use an
ineffective comparator treatment. This would actually show
that both treatments are ineffective, but the trick is to
convince the reader that this evidence demonstrates the
effectiveness of the treatments.
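The under-powered variant can be quantified with a quick simulation (hypothetical response rates, Python standard library only): even when the standard therapy is genuinely better, a small two-arm trial usually finds no significant difference, inviting the false conclusion of equivalence.

```python
import math
import random

random.seed(0)

def trial_is_significant(n=20, p_std=0.5, p_new=0.3):
    """One under-powered trial: the standard therapy truly works better."""
    std = sum(random.random() < p_std for _ in range(n))
    new = sum(random.random() < p_new for _ in range(n))
    pooled = (std + new) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    if se == 0:
        return False
    z = (std / n - new / n) / se  # two-proportion z test
    return abs(z) >= 1.96         # significant at the 5% level

trials = 2000
significant = sum(trial_is_significant() for _ in range(trials))
frac_nonsig = 1 - significant / trials
print(f"trials finding no significant difference: {frac_nonsig:.0%}")
# Most trials detect nothing, despite a 20-percentage-point true difference.
```

With 20 patients per arm, the trial's power is low, so 'no significant difference' is the expected result regardless of whether the therapies are truly equivalent.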
Perhaps the most reliable way to fool people with clinical
trials is to use a comparator therapy that causes a
deterioration of your primary clinical outcome measure. In a
typical parallel group design this will create an inter-group
difference favouring the experimental, ineffective
treatment. Consequently, you need only convince the reader
that this was due to the effectiveness of your therapy, and
at the same time omit the fact that the comparator
intervention led to a deterioration of the control group.
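A minimal sketch of this trick (hypothetical change scores, Python standard library only): the experimental arm is modelled as truly inert, while the comparator actively worsens the outcome, yet the between-group comparison flatters the inert therapy.

```python
import random
import statistics

random.seed(3)

# Hypothetical parallel-group trial on a 0-100 symptom score
# (higher = better). Mean change is 0 in the inert experimental arm
# and -5 in the actively harmful comparator arm.
n = 50
change_experimental = [random.gauss(0, 8) for _ in range(n)]
change_comparator = [random.gauss(-5, 8) for _ in range(n)]

diff = statistics.mean(change_experimental) - statistics.mean(change_comparator)
print(f"mean change, experimental arm: {statistics.mean(change_experimental):+.1f}")
print(f"mean change, comparator arm:   {statistics.mean(change_comparator):+.1f}")
print(f"between-group difference:      {diff:+.1f}")
# The difference is expected to favour the inert experimental therapy,
# driven entirely by the comparator's deterioration.
```

Reporting only the between-group difference, and not the within-group change in the comparator arm, completes the illusion.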
Experienced, critical professionals will find these
techniques far too obvious, so we need to consider even
more refined ways of producing false-positive results. A
report recently published by Paterson et al. [3] provides a
subtle example of this concept. Consider a group of patients
who, at entry to a trial, are asked whether they prefer
acupuncture (treatment A) or homeopathy (treatment B)
for their condition. Those who prefer treatment A are
allocated to treatment arm A and those who prefer treatment
B go to arm B. Both groups are then randomized to receive
either the preferred therapy or standard GP care. Patients
DDT Vol. 9, No. 3, February 2004 (editorial)
How to show that an ineffective therapy works

'…the aim of clinical trials is not to prove that therapy X works, but to test whether or not it works.'

Edzard Ernst and Peter H. Canter, Complementary Medicine, Universities of Exeter and Plymouth