From Sample Size to Effect-Size: Small Study Effect Investigation (SSEi)

F Richy, O Ethgen, O Bruyere, F Deceulaer, J Reginster

Citation
F Richy, O Ethgen, O Bruyere, F Deceulaer, J Reginster. From Sample Size to Effect-Size: Small Study Effect Investigation (SSEi). The Internet Journal of Epidemiology. 2003 Volume 1 Number 2.

Abstract
A small-sized study may report over- or underestimated effects of the investigated treatment in randomized controlled trials, a phenomenon called the small-study effect (SSE). It is commonly related to publication bias. Nevertheless, the intrinsic, probabilistic component of SSE has never been assessed, which is the purpose of this study. A stochastic model simulating the results of a given controlled trial with an increasing number of subjects in each group (from 1 to 200) was used. Predefined sets of input data (expected means and standard deviations) covering a full range of analytical situations were entered into a pseudo-random generator to reflect the variability of individual responses in each group. For each set of data, the process was repeated 200 times to take variability into account and thereby investigate the validity of the model. The effect size and its standard normal deviate were computed for each sample size in order to determine a threshold for SSE. The median (25%; 75%; 90%) sample size above which SSE was absent was 16.5 (8; 30; 50) subjects per group and was not significantly affected by the different input data sets. A sample size of 50 subjects in each group resulted in no small-study effect in 90% of the simulations. This study showed that SSE is not linked solely to selective publication and provides a rationale for taking it into account, in addition to power calculation, in the design of an RCT as well as in publication bias analysis.

BACKGROUND
The modern era of clinical epidemiology began in the late 1940s with the pioneering work of Bradford Hill 1. Randomized controlled trials (RCT) have become the keystone of a medicine progressively based on evidence. RCT account both for the confirmation of preliminary studies and for the evidence provided by meta-analyses. Their progressively standardized design and analytical techniques have provided the public health specialist with more reliable tools to assess and compare therapeutic approaches. As RCT grew in number and, more recently, were included in quantitative systematic reviews, quality assessment strategies were developed, including component approaches, evaluating selected aspects of the trial; checklists, involving lists of items; and scales, providing an integrated numeric score of quality such as the Jadad score 2. Nevertheless, little evidence supports their validity, owing to the lack of empirical research in the field 3. A seldom-considered item is the appropriateness of clinical epidemiology and statistics and of their interpretation; errors in this area are also called type III errors. One of the most misunderstood and poorly investigated type III errors is the "Small Study Effect" (SSE): the fact that trials that are small in terms of included subjects are prone to provide over- (or under-) estimated results 4.
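For intuition about this probabilistic component, the following is a minimal Python sketch, not the authors' original code, of the kind of stochastic simulation summarized in the abstract: for each group size, normally distributed responses are drawn for a treated and a control group from predefined means and standard deviations, the effect size (standardized mean difference) is computed, and the draws are repeated many times to show how the spread of effect-size estimates narrows as the groups grow. All parameter values, and the use of Cohen's d as the effect-size measure, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not the published input sets):
# treated group ~ N(mu_t, sd), control group ~ N(mu_c, sd).
mu_t, mu_c, sd = 1.0, 0.5, 1.0
replicates = 200              # repetitions per sample size, as in the abstract
group_sizes = range(2, 201)   # subjects per group (>= 2 needed to estimate an SD)

def effect_size(treated, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_t, n_c = len(treated), len(control)
    pooled_var = ((n_t - 1) * treated.var(ddof=1) +
                  (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2)
    return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

for n in group_sizes:
    estimates = []
    for _ in range(replicates):
        treated = rng.normal(mu_t, sd, n)
        control = rng.normal(mu_c, sd, n)
        estimates.append(effect_size(treated, control))
    estimates = np.asarray(estimates)
    # With small n, the simulated effect sizes scatter widely around the true
    # standardized difference (0.5 here); the scatter shrinks as n grows.
    if n in (5, 16, 50, 200):
        print(f"n={n:3d}  mean d={estimates.mean():.2f}  SD of d={estimates.std(ddof=1):.2f}")

Under these assumptions, the spread of the simulated effect sizes at small n illustrates why a handful of subjects per group can yield markedly over- or underestimated treatment effects purely by chance, before any publication bias comes into play.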
SSE is linked to several concepts. First, publication bias was reported in the late eighties and shown to lead to the preferential publication of small trials showing a statistically significant effect of the investigated treatment 5, 6. Secondly, it has been noticed that, compared with smaller studies, larger ones may apply a subtly different intensity of intervention (i.e. lower doses), or may differ in underlying confounding factors or in patients' symptom characteristics at inclusion (milder ones) 7, 8, 9, which yields more conservative estimates than those of small-sized studies. Thirdly, the probabilistic component of SSE may be due to the high variability of estimates when the number of observations is low. This may produce pseudo-random fluctuations of these parameters and a biased estimation of