Extending Question & Test Learning Technology Specifications with Enhanced Questionnaire Models

Elena García (Department of Computer Science, University of Alcalá, Ctra. Barcelona km. 33.600, 28871 Alcalá de Henares (Madrid), Spain; elena.garciab@uah.es)
Miguel-Ángel Sicilia (Department of Computer Science, Carlos III University, Avd. Universidad 30, 28891 Leganés (Madrid), Spain; msicilia@inf.uc3m.es)
José-Ramón Hilera (Department of Computer Science, University of Alcalá, Ctra. Barcelona km. 33.600, 28871 Alcalá de Henares (Madrid), Spain; jose.hilera@uah.es)
José-Antonio Gutiérrez (Department of Computer Science, University of Alcalá, Ctra. Barcelona km. 33.600, 28871 Alcalá de Henares (Madrid), Spain; jantonio.gutierrez@uah.es)

Abstract – Questionnaires are a commonly used instrument for diverse purposes in the context of educational technology. Applications of questionnaires range from the assessment of students to the evaluation of teaching, and also include the evaluation of learning contents and even of the technology that delivers them. Although the IMS QTI specification addresses the interchange of questionnaires and their results, the scope of its information model is primarily oriented towards the conventional evaluation of students’ knowledge or abilities. In consequence, it requires extensions to represent some information elements needed for other uses, and additions are also needed to describe certain item characteristics used in adaptive testing. In this paper, an abstract model called QM is described, which is intended to provide the foundation for a more comprehensive questionnaire information model. Extensions of the IMS QTI XML data structures are sketched to show how QM can enrich existing specifications with extended semantics for a wide range of applications.

I. INTRODUCTION

Questionnaires are currently an important and frequently used element in educational technology contexts, since they are commonly applied as an instrument to achieve a number of diverse objectives.
These objectives include the formative or summative assessment of students’ knowledge [13], the estimation of certain cognitive abilities of students [12], attitude measurements of users of learning technology [14], and evaluations of teaching [11]. Questionnaires have also been used to evaluate the educational technologies [10] that support the learning process.

As such important instruments, questionnaires have received attention from standardization efforts in the area of learning technology. More specifically, the Question & Test Interoperability (QTI) Working Group of the IMS Global Learning Consortium 1 addresses the need for a common electronic interchange format for questions and tests. As a result, the “IMS Question & Test Interoperability specification” (currently in its final 1.2 version, see [1] and its related documents) specifies an information model for the representation of assessment data, including questions, tests and their results. More specifically, the technical structure of the QTI specification rests on two independent components: the ASI (Assessment, Section, Item) component, used to describe the evaluation objects [15], and the ‘result reporting objects’ [16], used to contain the results of the evaluation (here we focus only on the ASI component, which describes the essential information structures of questionnaires). Nonetheless, the QTI model could be enriched with some additional features to better capture important aspects of questionnaire structure regarding their use in educational contexts (as described in [9]), mainly in two related respects: scope and level of detail.

1 <http://www.imsproject.org>
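The ASI (Assessment, Section, Item) hierarchy just mentioned can be illustrated with a minimal, non-normative QTI 1.2 fragment; the identifiers, titles and question text below are invented for illustration and are not taken from the specification:

```xml
<!-- Illustrative sketch of the QTI 1.2 ASI hierarchy:
     an assessment contains sections, which contain items. -->
<questestinterop>
  <assessment ident="A01" title="Sample test">
    <section ident="S01" title="Sample section">
      <item ident="I01" title="Sample question">
        <presentation>
          <material><mattext>2 + 2 = ?</mattext></material>
          <response_lid ident="R01" rcardinality="Single">
            <render_choice>
              <response_label ident="a">
                <material><mattext>3</mattext></material>
              </response_label>
              <response_label ident="b">
                <material><mattext>4</mattext></material>
              </response_label>
            </render_choice>
          </response_lid>
        </presentation>
      </item>
    </section>
  </assessment>
</questestinterop>
```

Note how the presentation of the question (the `presentation` element) is embedded directly inside the `item`; this coupling is one of the aspects discussed below.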
On the one hand, there are uses of questionnaires in educational settings that fall outside the explicit scope of the QTI specification; among the most relevant are the encoding of specific aspects of attitude-gathering questionnaires (useful in usability, learning-content quality and instructor evaluation) and even of simple information-gathering questionnaires. As a matter of fact, the QTI specification defines assessment in the following, somewhat vague, terms: “An Assessment is equivalent to a ‘Test’. It contains the collection of Items that are used to determine the level of mastery, or otherwise, that a participant has on a particular subject” [15].

On the other hand, the QTI specification lacks explicit support for some important meta-information about questionnaires, such as internal characteristics like reliability and validity, and other item measures needed to build item banks (difficulty and the like). Although all the IMS specifications are prepared for extension, some meta-attributes can be considered important enough to be included explicitly in the information models, and as such they should be considered for addition, as optional elements, where a more detailed interchange format is required or desired. In addition, the QTI model does not allow for a clear separation between the questions or items themselves and how they should be presented to the users.

In this paper, we describe the essential components of an abstract model for questionnaires, which we have called QM (standing for Questionnaire Model), aimed at the general representation of questionnaires and question banks of diverse kinds, taking into account the specific aspects of the topics described in [9]. We also briefly describe how the QTI specification can be extended, both to broaden its scope and to enrich the information it deals with, in order to demonstrate how QM could be used in concrete educational technology systems for a wide range of purposes.
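As an illustration of the kind of item meta-measures discussed above, the following sketch computes two classical values that an item bank could record for dichotomously scored items: item difficulty (the proportion of correct responses) and Cronbach’s alpha as an internal-consistency reliability estimate. The function names and the sample data are our own, not part of QTI or QM:

```python
# Classical test-theory measures that questionnaire meta-information
# could record per item bank (illustrative sketch, not QTI/QM syntax).
from statistics import pvariance

def item_difficulty(scores):
    """Proportion of correct (1) responses for one dichotomous item."""
    return sum(scores) / len(scores)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists
    (one inner list per item, one score per respondent)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# 3 items answered by 4 respondents, scored 0/1.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print(item_difficulty(responses[0]))        # 0.75
print(round(cronbach_alpha(responses), 3))  # 0.545
```

Measures like these would be carried as optional elements of the interchange format, so that adaptive-testing engines can select items by difficulty without recomputing them from raw result data.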
The rest of this paper is structured as follows. Section II describes the main QM components along with the rationale for their inclusion, and Section III outlines how