RESEARCH NOTES AND COMMUNICATIONS

ROLAND T. RUST, DONALD R. LEHMANN, and JOHN U. FARLEY*

A central assumption of meta-analysis is that the sample of studies fairly represents all work done in the field, published and unpublished. However, if studies with "poor" results are less likely to be published, a potential publication bias is present. The authors propose a maximum likelihood approach to estimating publication bias for the situation in which censorship based on effect size may occur. An explicit hypothesis test is provided for testing whether or not censorship is present. The method also simultaneously estimates the proportion of studies censored, the threshold past which censorship is avoided, and the probability of censorship if a potential observation is under the censorship threshold. Two published meta-analyses are examined and some publication bias is found in each, but no publication bias is detected in a meta-analysis of proprietary research data.

Estimating Publication Bias in Meta-Analysis

In recent years, meta-analysis has become popular as a method of generalizing the findings of a cross-section of marketing studies (Farley and Lehmann 1986). The quality of generalizations available from a meta-analysis depends on how representative the available studies are of both the present research base and a reasonable range of research environments. As a field matures, "publication bias" may be an increasing problem: the tendency of journals to accept only strong effects or statistically significant findings may lead to an upward bias in the magnitude of reported effects.

*Roland T. Rust is Professor and area head of Marketing, Owen Graduate School of Management, Vanderbilt University. Donald R. Lehmann is George E. Warren Professor of Business and John U. Farley is R. C. Kopf Professor of International Business, Columbia University. The authors thank Robert A. Peterson for generously providing data used in the study.
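The censorship mechanism the abstract describes can be written as a weighted-likelihood model. The following Python sketch is one simple instantiation under assumptions of our own (effect sizes normally distributed, a single hard threshold T below which a study is censored with probability p), not necessarily the authors' exact formulation: observed effects follow the density g(x) = w(x)f(x)/C, where w(x) = 1 − p for x < T, w(x) = 1 for x ≥ T, and C = 1 − p·Φ((T − μ)/σ), and all four parameters (μ, σ, p, T) are estimated by maximizing the resulting log-likelihood.

```python
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(params, x):
    """Negative log-likelihood of the censored-publication model.

    Assumed model (illustrative, not the paper's exact likelihood):
    true effects are N(mu, sigma^2); an effect below threshold T is
    published only with probability 1 - p, so observed effects have
    density  g(x) = w(x) f(x) / C  with C = 1 - p * Phi((T - mu)/sigma).
    """
    mu, log_sigma, logit_p, T = params
    sigma = np.exp(log_sigma)            # reparameterize so sigma > 0
    p = 1.0 / (1.0 + np.exp(-logit_p))   # reparameterize so p in (0, 1)
    log_f = stats.norm.logpdf(x, mu, sigma)
    log_w = np.where(x < T, np.log1p(-p), 0.0)   # weight for censorable region
    log_C = np.log1p(-p * stats.norm.cdf((T - mu) / sigma))  # normalizer
    return -np.sum(log_w + log_f - log_C)

def fit(x):
    """Maximize the likelihood with Nelder-Mead (derivative-free, since
    the likelihood is not smooth in T)."""
    start = np.array([x.mean(), np.log(x.std()), 0.0, np.quantile(x, 0.25)])
    return optimize.minimize(neg_log_likelihood, start, args=(x,),
                             method="Nelder-Mead",
                             options={"maxiter": 5000, "xatol": 1e-6})

# Simulated illustration: true effects N(0.3, 0.2^2); 70% of effects
# below 0.1 go unpublished, so the naive mean of published effects
# overstates the true mean effect.
rng = np.random.default_rng(0)
raw = rng.normal(0.3, 0.2, size=4000)
keep = (raw >= 0.1) | (rng.random(4000) < 0.3)
x = raw[keep]

res = fit(x)
print("naive mean of published effects:", round(x.mean(), 3))
print("ML estimate of mu under censorship model:", round(res.x[0], 3))
```

The reparameterizations (log σ, logit p) keep the optimizer inside the valid parameter space without explicit bounds; a likelihood-ratio test of H0: p = 0 against the fitted model would correspond to the paper's explicit test for the presence of censorship.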
As evidence accumulates, results that depart from those of past studies are viewed more suspiciously by reviewers, who are often authors of previous studies. Also, because of extensive replication, only discriminating studies may be published. Authors conditioned by the refereeing process may suppress or at least cull out results that show relatively small or insignificant effects. Finally, "better" journals may impose what appear to be more rigorous standards, which can lead to further suppression of "weak" results. This practice has been referred to as the "file drawer problem" (Rosenthal 1979), and it has been demonstrated empirically by surveys (Chase and Chase 1976; Greenwald 1975). A good review by Begg and Berlin (1988) documents the seriousness of publication bias.

Journal of Marketing Research, Vol. XXVII (May 1990), 220-226