THE DEVELOPMENT OF KNOWLEDGE TESTS WHEN STUDYING ACADEMIC TEXTS IN HIGHER EDUCATION

E. De Bruyne, K. Aesaert, M. Valcke
Ghent University (BELGIUM)

Abstract

In order to measure students' learning performance, research on higher education mainly relies on knowledge mastery tests. Although these tests are widely used, the psychometric quality of the resulting measurements has received little attention. The aim of this study is to outline the development and validation of two knowledge mastery tests, serving as indicators of learning performance, through the use of item response theory. As studying academic texts is a major source of domain-specific learning in higher education, both tests focus on knowledge mastery after studying an academic text on, respectively, "problem solving theory" and "metacognition". Both tests were administered to first-year university students (n = 180 for problem solving; n = 249 for metacognition) enrolled in a Bachelor programme in Educational Sciences. The items were checked for dimensionality, model-data fit, local item dependence and monotonicity. The final measure for the problem solving text consisted of 16 of the 30 original items; for the metacognition text, 21 of the 30 items remained relevant and informative. The results indicate that both knowledge tests are reliable, in particular for median ability levels. This study highlights the importance of developing knowledge mastery tests that provide accurate and valid measures of learning from academic texts.

Keywords: higher education, reading for academic purposes, item response theory, learning performance, test validation.

1 INTRODUCTION

In higher education, academic reading is indispensable for information processing in view of academic achievement. Students' mastery of domain-specific content – resulting from academic reading – often serves as an indicator of learning performance.
In order to measure students' learning performance, research relies on knowledge mastery tests. However, the literature rarely reports the psychometric validation and reliability procedures applied to such tests [1]. The aim of the present study is to develop an adequate measure of the knowledge students acquire from reading academic texts. After the theoretical background, we discuss the design, development and item response theory (IRT) analysis of knowledge mastery tests completed by first-year university students after studying academic texts.

2 BACKGROUND

2.1 Academic reading in higher education

Higher education students generally read domain-specific academic texts as learning materials. Academic literacy is crucial for students' scientific thinking, expertise, independence in learning science, and ability to use scientific knowledge in problem solving [2]. Among others, academic reading materials include extracts from course books, chapters in textbooks, reports, and research articles. In the present study, we focus on research articles published in international peer-reviewed journals as sources of knowledge (hereafter called "academic texts").

Comprehending a text arises from the reader's ability to connect the meaning of multiple sentences into a coherent mental representation of the overall meaning of the text [3]. Mental representations, as products of comprehension processes, contain multiple levels of meaning. Readers develop two classes of mental models, or representations of the meaning of text ideas: a text-based model, which is a mental representation of the propositions of the text; and a situation model, in which readers integrate information from the text with their prior knowledge [4]. Learning from academic texts requires reading comprehension as meaningful learning for understanding. In the present study, reading comprehension is approached as content-area comprehension [5].
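As background for the IRT analysis mentioned above, the finding that a test is most reliable "for median ability levels" follows from the test information function: under the Rasch model, an item is most informative for examinees whose ability is close to the item's difficulty, so a test whose difficulties cluster around zero measures mid-ability students most precisely. A minimal sketch in Python (the difficulty values are hypothetical, for illustration only; this is not the authors' actual analysis):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response for ability theta
    and item difficulty b: P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: I(theta) = P * (1 - P)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

# Hypothetical item difficulties clustered around zero; test information
# is the sum of item informations at each ability level.
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
thetas = np.linspace(-3.0, 3.0, 121)
test_info = np.array([item_information(t, difficulties).sum() for t in thetas])

# The information curve peaks near theta = 0, i.e. median ability,
# which is where measurement precision (and hence reliability) is highest.
peak_theta = thetas[test_info.argmax()]
```

The same logic explains item selection: items whose information curves add little at the ability levels of interest can be dropped without much loss of precision, which is consistent with retaining 16 and 21 of the original 30 items.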
Content-area reading comprehension

Proceedings of EDULEARN17 Conference, 3rd-5th July 2017, Barcelona, Spain
ISBN: 978-84-697-3777-4