Natural Language Engineering 10 (1): 25–55. © 2004 Cambridge University Press
DOI: 10.1017/S1351324903003206 Printed in the United Kingdom

Evaluation of text coherence for electronic essay scoring systems

E. MILTSAKAKI
University of Pennsylvania, Philadelphia, PA 19104, USA

K. KUKICH†
Educational Testing Service, Princeton, NJ 08541, USA

(Received 12 October 2001; revised 6 December 2002)

Abstract

Existing software systems for automated essay scoring can provide NLP researchers with opportunities to test certain theoretical hypotheses, including some derived from Centering Theory. In this study we employ the Educational Testing Service's e-rater essay scoring system to examine whether local discourse coherence, as defined by a measure of Centering Theory's Rough-Shift transitions, might be a significant contributor to the evaluation of essays. Rough-Shifts within students' paragraphs often occur when topics are short-lived and unconnected, and are therefore indicative of poor topic development. We show that adding the Rough-Shift based metric to the system improves its performance significantly, better approximating human scores and providing the capability of valuable instructional feedback to the student. These results indicate that Rough-Shifts do indeed capture a source of incoherence, one that has not been closely examined in the Centering literature. They not only justify Rough-Shifts as a valid transition type, but they also support the original formulation of Centering as a measure of discourse continuity even in pronominal-free text. Finally, our study design, which used a combination of automated and manual NLP techniques, highlights specific areas of NLP research and development needed for engineering practical applications.

1 Introduction

The task of evaluating students' writing ability has traditionally been a labor-intensive human endeavor. However, several different software systems, e.g.
PEG (Page and Peterson 1995), Intelligent Essay Assessor¹ and e-rater² are now being used to perform this task fully automatically. Furthermore, by at least one measure, these software systems evaluate student essays with the same degree of accuracy as human experts. That is, computer-generated scores tend to match human expert scores as frequently as two human scores match each other (Burstein, Kukich, Wolff, Chodorow, Braden-Harder, Harris and Lu 1998).

† Current address: National Science Foundation, 4201 Wilson Blvd., Arlington, VA 22230, USA
¹ http://lsa.colorado.edu
² http://www.ets.org/research/erater.html