DISCUSSION PAPER
Resistance is not futile, but neither is it always justified
Kirstin Borgerson PhD
Assistant Professor, Department of Philosophy, Dalhousie University, Halifax, NS, Canada
Keywords
clinical practice, compellingness,
Mark Tonelli, research relevance,
scientific validity, social value
Correspondence
Dr Kirstin Borgerson
Department of Philosophy
Dalhousie University
PO Box 15000, Halifax, NS
Canada B3H 4R2
E-mail: kirstin.borgerson@dal.ca
Commentary prepared for the JECP Special
Issue on Philosophy and Medicine 2013
Accepted for publication: 18 March 2013
doi:10.1111/jep.12057
Physicians do not always adjust their treatment recommendations in response to the latest research evidence, even when the research in question is judged to be methodologically rigorous. In his recent paper, ‘Compellingness: assessing the practical relevance of clinical research results’, Mark Tonelli identifies 12 features of clinical research that help to explain this resistance [1]. These features, which determine how compelling research results are to individual clinicians, are grouped into three categories: (1) epistemic factors; (2) fit with individualized care and patient values; and (3) considerations related to the stewardship of health care resources.
Tonelli’s project responds to frequent lamentations in the clinical literature about the failures of ‘knowledge translation’ (or whatever buzz word is favoured in your part of the world) [2]. Moreover, it does so by asking clinicians – in this case, an international working group of intensivists – what seems to make a difference in their own decision making. This simple move has many benefits: it starts an investigation into the research–practice gap from the perspective of practice; it allows room for the possibility that resistance to knowledge translation might be justified and even laudable in some cases; and it is explicitly provisional and so encourages the addition of future insights from other general and specialist physicians (and beyond). In what follows, I will clarify and provide some context for the project, identify what I see as two principal limitations of the paper, and conclude by sketching out how some of the ideas might be extended or further developed. I believe that Tonelli’s paper is a provocative starting point for discussions about the justifiability of resistance to some efforts to close the research–practice gap.
Clarifications and context
The results of research are compelling when they change the practice patterns of clinicians [1]. By contrast, clinical research can be assessed for its methodological quality using the widely adopted GRADE (Grading of Recommendations Assessment, Development and Evaluation) system [based on the hierarchy of evidence originally developed by proponents of evidence-based medicine (EBM)]; and guidelines can indicate the strength of a recommendation made on the basis of research, depending on the uncertainties arising from a set of research results [3]. It is generally acknowledged that evidence quality and the strength of guideline recommendations can come apart; high-quality research may lead to weak recommendations, for instance. Although Tonelli stresses the differences between constructing guidelines for general use (which is not the focus of his paper) and deciding which treatments to offer to individual patients (which is), there are some obvious similarities in these undertakings. According to the GRADE system, in the construction of guidelines, there are three types of uncertainties that can weaken a recommendation: (1) uncertainties about the balance of desirable and undesirable effects; (2) uncertainties about values and preferences; and (3) uncertainties about the efficient use of resources. The second and third of these factors bear a strong resemblance to the second and third categories of features determining compellingness: individualized care and patient values, and stewardship of health care resources.
On this reading, then, Tonelli appears to be constructing a more detailed, individualized version of the criteria that determine the strength of recommendations at the guideline level. This explains some significant similarities between the two, and also some of the differences – particularly those that arise in the first category, related to epistemic factors. For instance, assessments of objectivity (an epistemic factor) may lead an industry-averse clinician to dismiss a study because of concerns about conflict of interest and biased results, whereas producers of guidelines would most likely incorporate the study into the analysis as long as the quality of the methodology was high. Situating the project this way helps to make sense of the relationship between compellingness and some of the other terminology that arises in the paper, for instance references to valuable, relevant or trustworthy evidence. But it also raises some questions about the evidence provided in support of the arguments.
Journal of Evaluation in Clinical Practice ISSN 1365-2753
© 2013 John Wiley & Sons Ltd, Journal of Evaluation in Clinical Practice 19 (2013) 559–561 559