Experiments in Mobile Web Survey Design

Andy Peytchev and Craig A. Hill
RTI International, 3040 Cornwallis Rd., Research Triangle Park, NC 27709

Abstract

Self-administered surveys can be conducted over mobile web capable devices, but the literature on mobile web survey design is scarce. Often, methods developed for an established mode of data collection are applied to a new mode. However, some established methods may be inappropriate. Mobile web surveys have unique features, such as administration on small screens and keyboards, different navigation, and reaching respondents in various situations – factors that can affect response processes. Experiments were designed to address three main objectives. First, we test fundamental findings found robust across other modes, but whose impact may be diminished in mobile web surveys. Second, we test findings from experiments in (computer-administered) web surveys. Third, we experiment with the unique display, navigation, and input methods. While most findings from other modes are upheld, the small screen and keyboard on mobile devices introduce some undesirable differences in responses. Finally, we test attempts to alleviate these effects.

Key Words: Mobile devices, Cell phones, Smartphones, Survey design

1. Introduction

Handheld and wireless technologies have advanced to the point where self-administered surveys can now be conducted quite readily using Internet-capable mobile devices. Such devices have become increasingly widespread and their Internet capabilities widely used: over 10% of mobile subscribers report being active mobile web users in countries such as the U.S., the U.K., Italy, and Spain (Nielsen Mobile, 2008). Survey design in other modes has typically been informed by an accumulation of systematic research, which is absent for mobile web surveys. Modes of data collection differ not just in their capabilities, but also in their influences on survey responses.
New modes of survey data collection require research prior to implementation to minimize error, although implementation often precedes methodological work. Much can be borrowed from the research literature on other modes, yet much is likely different, and the erroneous, ill-considered, or by-rote adoption of existing methods may have detrimental results. Research shows that surveys in each mode require unique considerations and designs, as demonstrated after the introduction of Computer-Assisted Telephone Interviewing (e.g., House and Nicholls II, 1988; Couper, 2000), Computer-Assisted Personal Interviewing (e.g., Baker, Bradburn and Johnson, 1995; Couper, Hansen and Sadosky, 1997), and Computer-Assisted Self-Interviewing (e.g., Ramos, Sedivi and Sweet, 1998; McCabe, 2004). Mobile web surveys are displayed on devices that differ substantially from those now in common use by interviewers and respondents (i.e., laptops or desktop PCs), both in size and in functionality (Jones et al., 1999; Watters, Duffy and Duffy, 2003; Chae and Kim, 2004; Parush and Yuviler-Gavish, 2004; Sweeney and Crestani, 2006), and they can reach respondents in a variety of situations and locations (e.g., Brick et al., 2007). Systematic research on mobile web surveys is needed, rather than assuming that methods developed for other modes apply.

There are two guided empirical approaches to developing methods for a new mode: replication of findings from other modes of data collection, and identification of likely differences that are then tested through new experiments. Replication of key findings from other modes is a relatively simple way of filling a methodological void for a new mode by linking it to a larger body of research. Different modes can share many factors that affect how questions are interpreted and how responses are edited, through the same cognitive mechanisms (see Cannell, Miller and Oksenberg, 1981; Tourangeau, 1984; Strack and Martin, 1987).
However, replication of experiments from other modes alone is not sufficient, because it omits the effect of features unique to the new mode. For example, adopting self-administered paper questionnaire designs for web surveys by placing multiple questions on scrolling pages (e.g., Dillman, 2000) can faithfully replicate mail survey experiments. However, doing so limits the ability to reduce errors of omission through automatic skips (e.g., Couper et al., 1997; Peytchev, Couper, McCabe and Crawford, 2006), to minimize item nonresponse through early validation (e.g., DeRouvray and Couper, 2002; Mooney, Rogers and Trunzo, 2003), and to curtail possible correlated measurement error (e.g., Peytchev, 2007). Problems such as these can be ameliorated or eliminated if a paging design with one or a few questions per page is used. Such findings illustrate that likely differences between modes need to be identified and tested to improve practices that could be suboptimal in the new mode.

Surveys can be presented through a browser-based web application on an Internet-capable mobile device, just as is commonly done for computer-assisted self-interviewing. Mobile web surveys differ from computer-administered web surveys in several ways: respondents can be in a larger variety of situations or locations that could affect cognitive processing; the devices have small displays that can limit the amount of information shown and affect how the survey is seen and comprehended; the methods of screen navigation and data entry are different, possibly affecting how the respondent interacts with the survey and what selections are made; and programming functionality and application choices are (thus far) often limited compared to web surveys intended for administration on laptops or desktops.
While we find no research on the effect of these differences on survey responses, the related literature on human-computer interaction and information processing shows that task success rates (such as correct selections) are lower on small screens (Jones et al., 1999; Parush and Yuviler-Gavish, 2004), and that devices such as smartphones and PDAs, with their small screens and different navigation, lead to less information gathering than computers (Sweeney and Crestani, 2006).

Mobile web surveys are feasible and are being piloted in large-scale studies (e.g., Okazaki, 2007), but they are being conducted in the absence of methodological research on their design. It is possible that many design decisions can be imported from other modes of data collection. This new mode shares much of the functionality of web surveys and other computer-assisted self-interviews, such as the ability to present pictures and to present questions on separate pages. While much of the methodology for survey design in general, and web surveys in particular, is relevant to mobile web surveys, there are nonetheless many unique characteristics. Compared

Section on Survey Research Methods – 2008 AAPOR 4298