Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction

Marilee L. Teasley, Missouri State University
Erin M. Buchanan, Missouri State University

When students leave their advising appointments, how do they feel? Excited? Disappointed? If advisors and students do not share expectations and goals, the student may harbor negative feelings about the advising experience, which have the potential to lead to withdrawal and dissatisfaction. We surveyed students at a large midwestern university to see how students feel about their past and recent advising experiences. Overall, students reported satisfaction with their advising involvement, as average rating scores were high and positive. The measurement scale created to evaluate student satisfaction with advising was analyzed using exploratory and confirmatory factor analyses. This analysis showed two reliable scales: advising and outreach functions, which may be used in the future to evaluate advising programs.

[doi:10.12930/NACADA-12-132]

KEY WORDS: research instruments, scale development, student satisfaction, survey, undergraduates

An important facet of higher education, student retention inspires university leadership to investigate the extent to which their students feel connected to campus and related resources. Students utilize academic advising to make these important linkages to their institution, trusting the advisor as they transition from high school to college. Furthermore, advisor presence and support could make the difference between a frustrated withdrawal and a determined effort to graduate with honors (Drake, 2011). When investigating various factors related to student retention, Kuh (2008) pointed to the quality of advising on a college campus as among the most powerful predictors of overall campus satisfaction.
Metzner (1989) found that lower attrition rates were linked to high-quality rather than lower quality advising, but students who received some advising persisted to a greater extent than those who received no advising. McLaughlin and Starr (1982) cited numerous studies that have connected high-quality academic advising to retention and persistence as well as low-quality or no academic advising to dropped courses and attrition.

Because advising forms an integral part of a successful educational institution, stakeholders at colleges and universities concerned with student retention must continuously monitor, develop, evaluate, and assess advising services for consistency and high quality. One of the most popular ways to indirectly measure the success of an academic advising program involves use of a standardized scale. However, previous publications on evaluation efforts, based on a few well-known instruments, do not show the statistical properties of those scales. For example, Alexitch (2002) and Hale, Graham, and Johnson (2009) used the Academic Advising Inventory (AAI) by Winston and Sandor (1984). The AAI is a four-part evaluation instrument that determines the levels of prescriptive and developmental advising that students are receiving, the frequency with which various topics are discussed, student satisfaction levels, and demographic information. Others have utilized institution-specific scales (e.g., Creeden, 1990; Ford, 1985; Grites, 1981; Habley, 1994) not tested for analytic fit, reliability, or validity.
Some developers of evaluation initiatives have introduced new quantitative instruments comparing student preferences of advising to advising sessions in practice (Dickson & McMahon, 1991; Fielstein, 1989; Fielstein & Lammers, 1992; Fielstein, Scoles, & Webb, 1992), evaluating the differences between student and faculty perceptions (Creeden, 1990; Grites, 1981; Saving & Keim, 1998; Severy, Lee, Carodine, Powers, & Mason, 1994), and measuring overall satisfaction with advising (Bitz, 2010; Kelley & Lynch, 1991; Lynch, 2004; Reinarz & Ehrlich, 2002; Smith & Allen, 2006; Zimmerman & Mokma, 2004). Additionally, Lynch (2004) investigated differences between advisor type (general, departmental, and faculty advisors), and Fielstein et al. (1992) evaluated satisfaction differences between traditional- and nontraditional-aged students.

Furthermore, based on qualitative methods, findings from interviews (Beasley-Fielstein, 1986;

4 NACADA Journal Volume 33(2) 2013