2nd International Symposium on Sustainable Development, June 8-9, 2010, Sarajevo

An Inter-Rater Perspective for the Researches on Assessing Writing

Turgay HAN
Kafkas University, Faculty of Science and Letters, Department of English Language and Literature, Kars, Turkey
turgayhan@yahoo.com.tr

Hüseyin EFE
Atatürk University, Faculty of Letters, Department of English Language and Literature, Erzurum, Turkey
hefe@atauni.edu.tr

Erdinç PARLAK
Atatürk University, Kazım Karabekir Faculty of Education, Department of English Language Education, Erzurum, Turkey
erdincparlak@hotmail.com

Abstract: There are various factors that must be taken into consideration if EFL students' writings are to be rated consistently. In particular, those who conduct quantitative research using raters, or who aim to give suitable feedback on written productions, are responsible for fulfilling the requirements of the marking and scoring process. In this context, this paper provides insight into several rater-related issues, such as inter-rater reliability, analytic and holistic scoring, and rating criteria. The study includes 8 native raters and 8 non-native raters, each of whom rated an ESL essay both holistically and analytically. All participating raters had similar backgrounds in scoring ESL writings. The results showed no significant difference between the raters; that is, participants' grading of the essay was independent of whether they were native speakers. Finally, some important implications of the findings for essay rating practices, for both researchers and language teachers, are emphasized.

Key Words: Rater, Reliability, Assessing Writing, EFL

Introduction

Much research on assessing writing and error treatment involves variables concerning how each ESL writing is scored accurately.
When determining the effects of any type of feedback on EFL students' writings, there are certain requirements that researchers must fulfil in the course of the statistical analyses of the data obtained through their scoring methods. Although several factors influence scoring and raters' decision-making processes, researchers in the field of ESL essay rating have examined varying issues such as task requirements, rater characteristics, and essay characteristics (Barkaoui, 2010, p. 54). Assessing L2 writing accurately is very important for the validity of the inferences drawn from the scores. Therefore, essays judged by more than one examiner will be closer to a fair score than a judgement made by only one rater (Hamp-Lyons, 1990, p. 79). The writing assessment process involves multi-dimensional evaluation; for this reason, clarity, coherence, and grammatical quality are some of the core points to be assessed in a piece of writing. In this context, inter-rater reliability, one of the components of the writing assessment process, is a critical issue in scoring EFL/ESL writings, as raters are prone to several sources of variation during assessment, such as idiosyncratic judgements, rating methods, and criteria. Since rating is a subjective process, the decisions reflected in the scores given can affect the overall research. In this research, two types of raters were chosen in order to assure objectivity in scoring. As Stemler (2004) emphasizes, "Raters are often used when student products or performances cannot be scored objectively as right or wrong but require a rating degree. The use of raters results in the subjectivity that comes hand in hand with an interpretation of the product or performances" (cited in Bresciani, Oakleaf, et al., 2009, p. 3).
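The inter-rater reliability discussed above can be quantified in several ways; one common first step is computing the correlation between two raters' scores for the same set of essays. The sketch below is a minimal illustration of a Pearson correlation between two hypothetical score lists; the data and the choice of coefficient are assumptions for illustration only, not the analysis reported in this study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical holistic scores (0-10 scale) given by two raters to ten essays.
rater_a = [7, 5, 8, 6, 9, 4, 7, 6, 8, 5]
rater_b = [6, 5, 9, 6, 8, 5, 7, 7, 8, 4]

print(round(pearson_r(rater_a, rater_b), 3))
```

A coefficient close to 1 would indicate that the two raters rank the essays very similarly; researchers comparing rater groups (e.g., native vs. non-native) often supplement such correlations with agreement indices such as Cohen's kappa, since correlation alone ignores systematic severity differences between raters.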