Changes in Nonresponse to Income Questions

Ting Yan, Matthew Jans, Richard Curtin
Survey Research Center, University of Michigan

Abstract

Nonresponse, both unit and item, is a pressing issue in survey methodology: it has a great impact on inference to the target population and on survey costs. This paper focuses on item nonresponse to income questions, because income data, collected in almost every survey, are associated with a large amount of missing data. We examine changes in nonresponse to the income questions in the Survey of Consumer Attitudes (SCA). The SCA is a monthly RDD study; it first asks respondents to report their income in dollar amounts with an open-ended question. Those who do not provide an answer are followed up with a closed-ended question offering income brackets. We take a historical approach, studying 20 years of SCA data (from June 1986 to December 2005) and examining the trend in nonresponse to the income questions over time. Analyses indicate that income item nonresponse has decreased over time, and that the decline is related to the unit nonresponse rate, the refusal and refusal conversion rates, and nonresponse to other items in the survey. We interpret these findings in terms of both sample composition and respondent motivation. The results suggest that, for questions on household income, there is a trade-off between unit and item nonresponse.

Keywords: Item Nonresponse, Unit Nonresponse, Income, Panel Survey

1. Introduction

Nonresponse is a significant problem for survey researchers and a key concern for survey methodologists. It threatens sample representativeness, limits the ability to make inference about the target population, and risks nonresponse bias if sample respondents differ systematically from sample nonrespondents on the key analysis variables (Groves, 1989; Lessler & Kalsbeek, 1992).
The underlying causes of nonresponse, however, are not fully understood. To design surveys optimally, more information is needed on the characteristics and processes that lead one person to accept a survey request or answer a survey question and another to refuse. Nonresponse occurs at both the unit and the item level.

At the unit level, household surveys have experienced falling response rates over the past few decades (Atrostic, Bates, Burt, & Silberstein, 2001; Curtin, Singer, & Presser, 2005; de Heer, 1999; Hox & de Leeuw, 1994). Although some studies find no correlation between response rates and nonresponse error (Curtin, Presser, & Singer, 2000; Keeter, Miller, Kohut, Groves, & Presser, 2000; Merkle & Edelman, 2002), others either postulate in theory or demonstrate empirically a link between response propensity and nonresponse error (Groves, Cialdini, & Couper, 1992; Groves, Presser, & Dipko, 2004; Groves, Singer, & Corning, 2000). Understanding this link is important, since efforts to reduce unit nonresponse, such as incentives, extra calling, or extended field periods, have proved too costly to prevent continued declines.

Item nonresponse poses an additional risk to inference, compounding unit nonresponse. In the worst case, item nonresponse produces nonignorable missing data, a pattern of missingness that is correlated with the values of the variable of interest (Little & Rubin, 1987). By contrast, when item missing data are ignorable, that is, missing completely at random, nonresponse bias is not a critical concern. When the missing data are not ignorable, however, serious nonresponse bias can occur and standard imputation procedures may not repair the problem (cf. Little & Rubin, 1987). One survey item that tends to attract a high item nonresponse rate is income.
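The distinction between ignorable and nonignorable item missing data can be made concrete with a short simulation. The sketch below is purely illustrative and not part of the original analysis; the income distribution, missingness probabilities, and the $60,000 threshold are all hypothetical assumptions. When the probability that income is withheld rises with income itself (a nonignorable pattern), a complete-case mean is biased downward.

```python
import random

random.seed(42)

# Hypothetical household incomes drawn from a lognormal distribution
# (assumed parameters; median roughly $49,000).
incomes = [random.lognormvariate(10.8, 0.7) for _ in range(10_000)]

# Nonignorable (MNAR) missingness: respondents with higher incomes are
# assumed more likely to withhold, so missingness depends on the very
# value that is unobserved.
def is_missing(income, threshold=60_000):
    p_withhold = 0.4 if income > threshold else 0.1
    return random.random() < p_withhold

observed = [y for y in incomes if not is_missing(y)]

true_mean = sum(incomes) / len(incomes)
cc_mean = sum(observed) / len(observed)  # complete-case estimate

print(f"true mean:          {true_mean:,.0f}")
print(f"complete-case mean: {cc_mean:,.0f}")  # biased downward
```

Because higher incomes are disproportionately dropped, the complete-case mean understates the true mean, and no amount of extra sample removes the bias; this is the sense in which nonignorable missingness cannot be fixed by standard deletion-based analysis.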
The survey literature shows that typical item nonresponse to income questions is around 20-40% (Moore, Stinson, & Welniak, 1999; Juster & Smith, 1997). Table 1 displays the item nonresponse rates to income questions in several household surveys conducted in the United States. The item nonresponse rate is a function of question characteristics, interviewer characteristics, and design features (such as mode of data collection, whether the survey is cross-sectional or longitudinal, and so on), so comparisons between any two numbers cannot be taken literally. It is nonetheless apparent from the table that item nonresponse to income questions is generally high across surveys and across time, ranging from 14% to 35%. An analyst who runs a complete-case analysis involving income at these missing-data rates may have to omit up to one third of the data, markedly reducing the sample size and the statistical power. Such high nonresponse rates have earned income a reputation as a difficult and sensitive question to ask.

AAPOR - ASA Section on Survey Research Methods 4270