A Meta-Analysis of the Effectiveness of Intelligent Tutoring Systems on K–12 Students' Mathematical Learning

Saiying Steenbergen-Hu and Harris Cooper
Duke University

In this study, we meta-analyzed empirical research on the effectiveness of intelligent tutoring systems (ITS) for K–12 students' mathematical learning. A total of 26 reports containing 34 independent samples met the study inclusion criteria. The reports appeared between 1997 and 2010. The majority of the included studies compared the effectiveness of ITS with that of regular classroom instruction; a few compared ITS with human tutoring or homework practices. Among the major findings are (a) overall, ITS had no negative and perhaps a small positive effect on K–12 students' mathematical learning, as indicated by average effect sizes ranging from g = 0.01 to g = 0.09, and (b) on the basis of the few studies that compared ITS with homework or human tutoring, the effectiveness of ITS appeared to be small to modest. Moderator analyses revealed two findings of practical importance. First, the effects of ITS appeared to be greater when the interventions lasted for less than a school year than when they lasted for one school year or longer. Second, ITS were more effective at helping students drawn from the general population than at helping low achievers. This finding draws attention to the question of whether computerized learning might contribute to the achievement gap between students with different achievement levels and aptitudes.

Keywords: intelligent tutoring systems, effectiveness, mathematical learning, meta-analysis, achievement

Intelligent tutoring systems (ITS) are computer-assisted learning environments created using computational models developed in the learning sciences, cognitive sciences, mathematics, computational linguistics, artificial intelligence, and other relevant fields.
ITS often are self-paced, learner-led, highly adaptive, and interactive learning environments operated through computers. ITS are adaptive in that they adjust and respond to learners with tasks or steps to suit learners' individual characteristics, needs, or pace of learning (Shute & Zapata-Rivera, 2007).

ITS have been developed for mathematically grounded academic subjects, such as basic mathematics, algebra, geometry, and statistics (Cognitive Tutor: Anderson, Corbett, Koedinger, & Pelletier, 1995; Koedinger, Anderson, Hadley, & Mark, 1997; Ritter, Kulikowich, Lei, McGuire, & Morgan, 2007; AnimalWatch: Beal, Arroyo, Cohen, & Woolf, 2010; ALEKS: Doignon & Falmagne, 1999); physics (Andes, Atlas, and Why/Atlas: VanLehn et al., 2002, 2007); and computer science (dialogue-based intelligent tutoring systems: Lane & VanLehn, 2005; ACT Programming Tutor: Corbett, 2001). Some ITS assist with the learning of reading (READ 180: Haslam, White, & Klinge, 2006; iSTART: McNamara, Levinstein, & Boonthum, 2004), writing (R-WISE writing tutor: Rowley, Carlson, & Miller, 1998), economics (Smithtown: Shute & Glaser, 1990), and research methods (Research Methods Tutor: Arnott, Hastings, & Allbritton, 2008). There are also ITS for specific skills, such as metacognitive skills (see Aleven, McLaren, & Koedinger, 2006; Conati & VanLehn, 2000).

The use of ITS as an educational tool has increased considerably in recent years in U.S. schools. Cognitive Tutor by Carnegie Learning, for example, was used in over 2,600 schools in the United States as of 2010 (What Works Clearinghouse, 2010a).

ITS are developed so as to follow the practices of human tutors (Graesser, Conley, & Olney, 2011; Woolf, 2009). They are expected to help students of a range of abilities, interests, and backgrounds. Research suggests that expert human tutors can help students achieve learning gains as large as two sigmas (Bloom, 1984).
Although not as large as the effect Bloom (1984) found, a recent meta-review by VanLehn (2011) reported that human tutoring had a positive impact of d = 0.79 on students' learning.

ITS track students' subject domain knowledge, learning skills, learning strategies, emotions, or motivation in a process called student modeling, at a level of fine-grained detail that human tutors cannot match (Graesser et al., 2011). ITS can also be distinguished from computer-based training, computer-assisted instruction (CAI), and e-learning. Specifically, given the enhanced adaptability and power of computerized learning environments, ITS are considered superior to computer-based training and CAI in that ITS allow an infinite number of possible interactions between the systems and the learners (Graesser et al., 2011).

VanLehn (2006) described ITS as tutoring systems that have both an outer loop and an inner loop. The outer loop selects learning tasks; it may do so in an adaptive manner (i.e., select different problem sequences for different students), on the basis of the system's assessment of each individual student's strengths and weaknesses with respect to the targeted learning objectives. The inner loop elicits steps within each task (e.g., problem-solving steps) and provides guidance with respect to each step.

This article was published Online First September 9, 2013.

Saiying Steenbergen-Hu and Harris Cooper, Department of Psychology & Neuroscience, Duke University.

Correspondence concerning this article should be addressed to Saiying Steenbergen-Hu, Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Box 90086, Durham, NC 27708-0086. E-mail: ss346@duke.edu

Journal of Educational Psychology, 2013, Vol. 105, No. 4, 970–987. © 2013 American Psychological Association. DOI: 10.1037/a0032447
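VanLehn's (2006) two-loop account can be made concrete with a rough sketch. The code below is purely illustrative and is not drawn from any actual ITS implementation: the class and function names, the toy mastery-update rule, and the weakest-objective selection policy are all hypothetical assumptions introduced for this example.

```python
class StudentModel:
    """Student modeling: tracks a mastery estimate per learning objective."""

    def __init__(self, objectives):
        # Hypothetical starting estimate of 0.5 for every objective.
        self.mastery = {obj: 0.5 for obj in objectives}

    def weakest_objective(self):
        # Pick the objective with the lowest current mastery estimate.
        return min(self.mastery, key=self.mastery.get)

    def update(self, objective, correct):
        # Toy update rule: nudge the estimate toward the observed outcome.
        delta = 0.1 if correct else -0.1
        self.mastery[objective] = min(1.0, max(0.0, self.mastery[objective] + delta))


def inner_loop(model, objective, steps, answer_step):
    """Inner loop: elicit each step within a task and give step-level guidance."""
    hints = []
    for step in steps:
        correct = answer_step(step)          # the student's attempt at this step
        if not correct:
            hints.append(f"hint for {step}")  # step-level guidance (hint/feedback)
        model.update(objective, correct)
    return hints


def outer_loop(model, task_bank, answer_step, n_tasks):
    """Outer loop: adaptively select the next task for the weakest objective."""
    sequence = []
    for _ in range(n_tasks):
        objective = model.weakest_objective()
        sequence.append(objective)
        inner_loop(model, objective, task_bank[objective], answer_step)
    return sequence
```

For example, a student who starts weak on fractions would be served fraction tasks until the model's estimate catches up, after which the outer loop switches objectives:

```python
model = StudentModel(["fractions", "equations"])
model.mastery["fractions"] = 0.2  # assume a known weakness
tasks = {"fractions": ["f1", "f2"], "equations": ["e1"]}
outer_loop(model, tasks, lambda step: True, n_tasks=3)
# → ["fractions", "fractions", "equations"]
```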