JOURNAL OF MARKETING RESEARCH, NOVEMBER 1982

a comprehensive treatment of response quality (or even response consistency) issues to the extent that it omits issues such as motivation and salience. However, the context and meaning question on which it focuses is a fascinating issue, one that has been underresearched in the past but is receiving increasing attention (for example, the 1982 conference of the American Association for Public Opinion Research featured a plenary session talk entitled, "Is Item Order the Quota Sampling of the '80s?"). More important, the question of context and meaning is a central issue in how social research should be practiced. Response inconsistency is a convenient setting in which to study the issue because inconsistency reveals a context-dependent inferential process. However, contextually based meaning applies to other situations where it is not revealed by inconsistency. Demand characteristics, for example, can be viewed as stable results of an inference process in which research participants interpret the research environment and their place in it. This sort of stable context dependence is an important element of some criticisms of laboratory experiments as a vehicle for studying social behavior.

The chapters in this book are diverse, well written, and jointly cohesive enough to prompt thinking about issues beyond the immediate subject of response consistency. They also provide theoretical and empirical background varied enough to enrich the thinking of researchers directly concerned with response consistency. For these reasons, the time spent reading this brief (100 pages) book should be time well spent for almost any social researcher.

ED BLAIR
University of Houston

STRUCTURAL ANALYSIS OF DISCRETE DATA WITH ECONOMETRIC APPLICATIONS, C. F. Manski and D. McFadden, editors. Cambridge, MA: The MIT Press, 1981. 477 pp.
This book consists of 13 contributions addressing "parametric statistical inference on structural probability models in which some or all of the endogenous variables are discrete valued" (p. xvii). Two facts, at least, should make this topic attractive to the marketing researcher. First, human behavior is commonly observed as discrete events (e.g., consumer choices among brands, retail patronage, etc.). Second, these events can be assumed to result from probabilistic processes, either due to an inherent randomness of human behavior or due to the effect of unobservable factors.

Discrete Choice Analysis

Probabilistic modeling of individual choice behavior has its origin in the field of psychometrics, with Thurstone's Law of Comparative Judgment, Luce's Choice Axiom, and more recently Tversky's Elimination by Aspects model. McFadden uses these three approaches as the basic frame of reference for reviewing current choice models in Chapter 5 (which would fit better as the introductory chapter for this book). In this chapter, Luce's model is generalized into other constant utility models (CUM) based on axioms about how the choice probabilities for the different alternatives should relate. A more intuitive set of assumptions is made by the random utility models (RUM) derived from Thurstone's Law, according to which individuals "draw" random utility values from a family of utility functions (determined by the attributes of the choice alternatives) and then choose the alternative with the highest utility. The third category includes elimination models, generalized from Tversky's model, whereby the choice process is viewed as a sequence of decisions in which alternatives are sorted out of the choice set until only the preferred one remains. Despite this categorization, McFadden demonstrates the consistency of some of the "Lucean" models with the random utility maximization assumed by RUM.
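The consistency McFadden demonstrates can be made concrete: a random utility model with i.i.d. Gumbel (extreme value) disturbances yields exactly the logit choice probabilities of Luce's constant utility model. The following sketch (plain Python, with purely hypothetical utility values not taken from the book) simulates such utility draws and compares the resulting choice frequencies with the closed-form Luce probabilities.

```python
import math
import random

def luce_probabilities(utilities):
    """Closed-form Luce (logit) choice probabilities."""
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

def simulate_rum(utilities, n_draws=200_000, seed=0):
    """Random utility model: add an i.i.d. Gumbel disturbance to each
    systematic utility and choose the alternative with the highest draw."""
    rng = random.Random(seed)
    counts = [0] * len(utilities)
    for _ in range(n_draws):
        # Inverse-CDF Gumbel draw: -log(-log(U)), U ~ Uniform(0, 1)
        draws = [u - math.log(-math.log(rng.random()))
                 for u in utilities]
        counts[draws.index(max(draws))] += 1
    return [c / n_draws for c in counts]

utilities = [1.0, 0.5, 0.0]           # hypothetical systematic utilities
print(luce_probabilities(utilities))  # ~[0.506, 0.307, 0.186]
print(simulate_rum(utilities))        # converges to the same values
```

The same i.i.d. disturbance assumption is what produces the independence-from-irrelevant-alternatives property of Luce's formulation, which the "corrected" probabilistic models discussed next attempt to relax.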
An important theorem (Williams-Daly-Zachary) is presented to determine the consistency of any arbitrary probabilistic choice model (PCM) with the random utility maximization assumption. The set of conditions derived from this theorem can be used to ascertain whether some of the probabilistic models based on "corrections" to Luce's basic formulation have an intuitive interpretation.

An especially attractive RUM is the multinomial probit model (MNP), which allows for flexible patterns of interactions among choice alternatives. In Chapter 6, Fisher and Nagin compare two variants of this general model: (1) the linear-in-parameters, independent identically distributed disturbances (LPIID) model, which assumes that randomness in utilities is due to unobservable factors uncorrelated with observable attributes, and (2) the random coefficients, covarying disturbances (RCCD) model, which attributes randomness to the random variation of attribute weights in the utility function (variation of "tastes"). The RCCD model proved to have higher predictive power and to be less sensitive to model misspecification. The differences in predictive power, however, were significantly reduced after an individual characteristic (income) was included to account for the variations in preferences (for price). This suggests possible uses of the RCCD model for market segmentation. Instead of clustering consumers by idiosyncratic attribute weights estimated at the individual level, a researcher may use the sample distribution of "tastes" provided by the RCCD model for attribute-based segmentation. If individual characteristics can be found to account for these "taste" variations, the population distribution of these characteristics may also be used for segmentation.

An issue related to the estimation of choice probabilities in the multinomial probit model is discussed by Lerman and Manski in Chapter 7.
Two approximate procedures for the computation of choice probabilities (the Monte Carlo procedure proposed by Manski, and Clark's numerical approximation for the maximum of several normals) are compared through a simulation
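The Monte Carlo procedure is, in essence, a frequency simulator: draw utility vectors from the assumed multivariate normal distribution and record how often each alternative comes out on top. A minimal sketch follows (plain Python, with a hypothetical one-factor error structure standing in for a general covariance matrix; the numbers are illustrative, not from the book).

```python
import random

def mnp_prob_mc(means, loadings, n_draws=200_000, seed=1):
    """Frequency-simulator estimate of the probability that alternative 0
    is chosen under a multinomial probit model whose errors follow a
    one-factor structure: eps_j = loadings[j] * f + e_j, with f and the
    e_j independent standard normals (errors covary through f)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        f = rng.gauss(0.0, 1.0)  # common factor shared by all alternatives
        utils = [m + lam * f + rng.gauss(0.0, 1.0)
                 for m, lam in zip(means, loadings)]
        if utils.index(max(utils)) == 0:
            hits += 1
    return hits / n_draws

# Symmetric case: equal means and loadings, so each of the three
# alternatives should be chosen about a third of the time.
print(mnp_prob_mc([0.0, 0.0, 0.0], [0.5, 0.5, 0.5]))  # ~0.33
```

The frequency estimate is unbiased but noisy for small numbers of draws, which is precisely why a fast deterministic alternative such as Clark's moment-matching approximation to the maximum of several normals is worth comparing against it.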