International Journal of Forecasting 7 (1991) 251-254
North-Holland

Research on forecasting

The International Journal of Forecasting provides critiques of papers published elsewhere. The editors try to select recent papers that are likely to be of significant interest to the readers. Our review of each paper is sent to the author for comment prior to publication. Almost all authors respond with suggestions, and these typically lead to improvements in the critiques.

If you know of interesting papers, or if you have published such a paper, please send a copy to the Editors for possible inclusion in this section of the International Journal of Forecasting. To obtain copies of the papers reviewed in this section, contact the authors of the original papers.

Robert C. Blattberg and Stephen J. Hoch, "Database models and managerial intuition: 50% model + 50% manager", Management Science 36 (1990) 887-899.

In this article the authors take a somewhat unusual line on the problem of combining expert judgement with model-based forecasts. They point out that experts are typically strongest where models are weakest, and vice versa. For example, experts display perceptual biases, are overconfident, are influenced by organizational politics, and are inconsistent in their weighting of evidence. Models, in contrast, are consistent, but they are rigid and include only what the expert has identified as important, whereas experts can adapt to changing circumstances and have highly organized domain knowledge.

The paper contains a discussion of the optimal weights to be given to the two sets of forecasts and confirms that equal weighting is a good heuristic. The authors also analyze the value of less sophisticated models, showing that even with only 50% of the variables included, the model remained useful in upgrading the managerial forecasts. In essence, the forecasters omit key variables (even those they themselves identify as important) and mis-weight others.

To analyze the separate contributions that the model and the expert can make in forecasting, the authors follow an approach similar to that adopted in the bootstrapping literature. This involves decomposing the R2 of the optimal combination of expert and model into the component explained by the expert and the component explained by the model alone. By adopting this approach, Blattberg and Hoch are able to derive a term they call the 'unexplained variance picked up by the manager'.

To illustrate their arguments they use five companies: two that forecast fashion catalog sales, and three that forecast the redemption rate of company-based promotions. In all cases the managers making the expert forecasts possessed a substantial degree of additional information apart from that included in the model. Blattberg and Hoch ascribe this to non-linear intuition, but offer no evidence that non-linearities are the root cause of the additional managerial expertise - I doubt it.

The authors note that models such as those they developed suffer from model shrinkage when moving from the estimation sample to the forecasting application. They suggest that recent structural changes in the decision environment, which are missed by a model estimated over all the past data, can be picked up by the expert.

In the paper's conclusions the authors show a degree of unease in accepting the logic of their own analysis, worrying perhaps that the models they considered were too 'naive'. They also considered whether the results occurred because the data were observed in non-experimental situations. As they correctly point out, many if not all of the experimental studies ask a trivial question - does the well-specified model outperform the

0169-2070/91/$03.50 © 1991 - Elsevier Science Publishers B.V. (North-Holland)
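The R2 decomposition and the equal-weighting heuristic discussed in the review can be illustrated with a small synthetic sketch. This is not Blattberg and Hoch's data or code; the variables (x_model, x_expert, the forecast weights, and the noise levels) are hypothetical, chosen only to mimic a manager who sees information the database model misses but weights the evidence inconsistently.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical setup: the outcome depends on two signals.
x_model = rng.normal(size=n)   # information captured by the database model
x_expert = rng.normal(size=n)  # extra information only the manager observes
y = 1.0 * x_model + 0.8 * x_expert + rng.normal(scale=1.0, size=n)

# Model forecast: uses only the modelled variable.
model_fc = 1.0 * x_model
# Expert forecast: sees both signals but weights them noisily/inconsistently.
expert_fc = 0.7 * x_model + 0.8 * x_expert + rng.normal(scale=0.5, size=n)

def r2(y, fitted):
    """R-squared of fitted values against y."""
    return 1.0 - (y - fitted).var() / y.var()

# R2 of the model alone (OLS of y on the model forecast, with intercept).
slope, intercept = np.polyfit(model_fc, y, 1)
r2_model = r2(y, slope * model_fc + intercept)

# R2 of the optimal combination (OLS of y on both forecasts).
X = np.column_stack([np.ones(n), model_fc, expert_fc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2_comb = r2(y, X @ beta)

# The increment is the 'unexplained variance picked up by the manager'.
print(f"R2, model alone:          {r2_model:.3f}")
print(f"R2, optimal combination:  {r2_comb:.3f}")
print(f"picked up by the manager: {r2_comb - r2_model:.3f}")

# The 50/50 heuristic: equally weight the two forecasts as they stand.
equal_fc = 0.5 * model_fc + 0.5 * expert_fc
print(f"R2, equal-weight combo:   {r2(y, equal_fc):.3f}")
```

Because the model-only regression is nested in the combined one, the combined R2 can never be lower, and the gap measures how much the expert's private information adds; in this toy setup the equal-weight combination also beats the model alone without any weight estimation, which is the sense in which equal weighting is a serviceable heuristic.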