Talking the talk, but not walking the walk: RT-qPCR as a paradigm for the lack of reproducibility in molecular research

Stephen Bustin* and Tania Nolan†

*Postgraduate Medical Institute, Faculty of Medical Science, Anglia Ruskin University, Chelmsford, Essex, UK
†Institute of Population Health, Faculty of Medical and Human Sciences, University of Manchester, Manchester, UK

ABSTRACT

Poorly executed and inadequately reported molecular measurement methods are amongst the causes underlying the lack of reproducibility of much biomedical research. Although several high impact factor journals have acknowledged their past failure to scrutinise adequately the technical soundness of manuscripts, there is a perplexing reluctance to implement basic corrective measures. The reverse transcription real-time quantitative PCR (RT-qPCR) is probably the most straightforward measurement technique available for RNA quantification and is widely used in research, diagnostic, forensic and biotechnology applications. Despite the impact of the minimum information for the publication of quantitative PCR experiments (MIQE) guidelines, which aim to improve the robustness and the transparency of reporting of RT-qPCR data, we demonstrate that elementary protocol errors, inappropriate data analysis and inadequate reporting continue to be rife and conclude that the majority of published RT-qPCR data are likely to represent technical noise.

Keywords: Gene expression, qPCR, quantification, reverse transcription.

Eur J Clin Invest 2017; 47 (10): 756–774

Background

Biomedical research is supported by immense sums of public and private funding, with the NIH alone investing over US$32 billion annually [1]. This generates tens of thousands of peer-reviewed research papers every year and drives the proliferation of new hypotheses, guides the direction of fresh research efforts, leads to the development of new treatments and so underpins further progress.
However, the substantial increase in the number of scientific publications is also spurred by reasons other than the desire to communicate results to the scientific community, not least by the significance of researchers' publication output to their career progression [2], given the severe competition for tenured positions [3] and research funding [4]. The importance, if not the appropriateness [5], of using impact factors to rank the quality of research is well established, and the most highly cited papers are published by a small number of prestigious journals [6]. Necessarily, there are hundreds of journals that share the remaining output and publish papers that can be equally significant, especially if they report technical innovations or results that are, at the time, controversial.

Inevitably then, regardless of impact factor, a key responsibility of a journal's editorial team is to ensure that there are procedures in place that serve as gatekeepers, ensuring that published results are broadly reproducible and, ideally, biologically relevant [7]. In theory, this is achieved through (i) editorial policies that ensure maximum transparency through the publication of accurate, comprehensive and explicit protocols and (ii) rigorous screening procedures, most often through the peer-review process [8]. In practice, there are significant doubts about the validity of many research claims [9] in the context of a flawed research infrastructure that encourages disregard for responsible scientific process, regulation, transparency and reporting [10].

Confidence in quantitative measurements depends on a number of parameters, one of which is reproducibility [11]. Reproducibility incorporates both biological and technical variability, and as long ago as 1949, it was demonstrated that experimental test results can vary widely, even when performed by the same individual at the same time [12].
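The practical consequence of technical variability is easy to sketch numerically: because PCR amplification is exponential, scatter in replicate quantification-cycle (Cq) values translates multiplicatively into the estimated target quantity. The following minimal sketch uses hypothetical replicate Cq values (not data from any study) and assumes an ideal amplification efficiency of 100%, so that a difference of one Cq unit corresponds to a twofold difference in quantity.

```python
# Illustrative sketch with hypothetical numbers: how technical scatter in
# replicate Cq measurements propagates into fold-change uncertainty,
# assuming 100% amplification efficiency (fold change = 2 ** delta_Cq).
import statistics

# Hypothetical technical replicates of a quantification cycle (Cq) measurement
cq_replicates = [24.1, 24.6, 23.9, 24.8]

mean_cq = statistics.mean(cq_replicates)      # sample mean
sd_cq = statistics.stdev(cq_replicates)       # sample standard deviation

# A spread of +/- 1 SD in Cq corresponds to a multiplicative spread of
# 2**sd_cq either way in the estimated target quantity.
fold_uncertainty = 2 ** sd_cq

print(f"mean Cq = {mean_cq:.2f}, SD = {sd_cq:.2f}")
print(f"+/- 1 SD in Cq spans a {fold_uncertainty:.2f}-fold range in quantity")
```

With these made-up replicates, a Cq standard deviation of roughly 0.4 cycles already corresponds to an uncertainty of more than 30% in the estimated quantity, before any biological variability is considered — which is why poorly controlled technical noise can swamp modest expression differences.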
Since then, there have been numerous publications that highlight the problem of lack of reproducibility (reviewed in [13]) and the role journals play in failing to enforce their own editorial policies [14]. This, together with the fact that credibility and translation are only modestly correlated [15], explains why basic research findings are rarely adopted into clinical practice [16].

© 2017 Stichting European Society for Clinical Investigation Journal Foundation. DOI: 10.1111/eci.12801