ADDING TAGS TO COURSES TO IMPROVE EVALUATION
A multiplatform LCMS approach that allows multidimensional analysis

Eduard Cespedes-Borras, Aitor Rodriguez, Jordi Carrabina, Javier Serrano
Centre de Prototips I Solucions HW/SW, Universitat Autonoma de Barcelona, ETSE, E-08193 Bellaterra, Spain
ecespedes@IEEE.org, aitor.rodriguez@uab.cat, Jordi.Carrabina@uab.cat, javier.serrano@uab.cat

Keywords: Tag, Bloom's Taxonomy, Educational Objectives, Metadata, Statistical Analysis, E-learning, SCORM, QTI, Learning Design, Authoring Tool, LMS, Assessment Analysis Model.

Abstract: The main idea of this paper is to add tags to the contents of any given course, structured according to any Learning Content Management System (LCMS). Tags, popularized by Web 2.0 applications, offer a free and flexible way of characterizing materials according to any criteria a teacher can imagine. They can therefore be used for specific analysis and clustering of both the teaching methodology and the students' learning. Our approach is platform independent in the sense that it can be applied on top of any current LCMS. To achieve this property, we define an XML specification that embeds specific, platform-dependent queries. This choice is far more efficient than building plug-ins or hard-coded solutions for every existing learning platform (and its underlying database). At the end of the paper, we demonstrate the power of this approach with a course example.

1 INTRODUCTION

A large number of specifications have been created to standardize aspects of the e-learning process, along with a large variety of proposals that standardize educational content. Successful examples are the Sharable Content Object Reference Model (SCORM) (ADL, 2008) and some of the IMS specifications (IMS Content Package, IMS Simple Sequencing, etc.) (IMS, 2001). There are also other specifications, such as Learning Design (LD) (IMS LD, 2003), that manage the e-learning sequencing process.
LAMS (LAMS, 2002) is the main tool for creating this type of content. However, its evaluation depends on the LCMS, such as Moodle (Moodle, 2008) or Sakai (Sakai Project, 2003), in which LAMS may be embedded. As for SCORM, its evaluation capabilities also depend on the LCMS that manages it. In addition, there are systems based on the Question and Test Interoperability (QTI) standard (C. Smythe, 2002), which focuses on questions and their evaluation. For the latter, it is easy to find analytical and statistical tools that improve evaluation performance. In general, however, there is a lack of standardized reporting systems that could be used to draw conclusions about the efficiency of the teaching and learning processes.

This paper proposes two new features: (1) a new specification (with the corresponding tool) to tag e-learning structures, and (2) a methodology to efficiently connect our tool to any e-learning system. In this way, we aim to improve the management of learning evaluation while giving evaluators the flexibility to add any criteria, which can later produce a wide variety of results using multidimensional analysis tools.

2 E-LEARNING STANDARDS ANALYSIS

Different e-learning standards emphasize different learning aspects. In the following, we briefly review them.

2.1 SCORM Introduction

The Sharable Content Object Reference Model (SCORM) is an aggregated specification for asynchronous distance learning, organized by the Advanced Distributed Learning Initiative (ADL).
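To illustrate the kind of aggregated structure SCORM defines, the following is a minimal sketch of a SCORM package manifest (imsmanifest.xml), based on the public IMS Content Packaging schema that SCORM builds on. All identifiers, titles, and file names here (org1, res1, lesson1.html, etc.) are hypothetical placeholders, not taken from any real course.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical minimal SCORM content package manifest.
     The course structure (organizations/items) is kept separate
     from the physical files (resources), which is what makes the
     content sharable and reusable across LCMS platforms. -->
<manifest identifier="SAMPLE-COURSE-1"
          xmlns="http://www.imsglobal.org/xsd/imscp_v1p1"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_v1p3">
  <organizations default="org1">
    <organization identifier="org1">
      <title>Sample Course</title>
      <!-- Each item points to a resource via identifierref -->
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <!-- A launchable Sharable Content Object (SCO) -->
    <resource identifier="res1" type="webcontent"
              adlcp:scormType="sco" href="lesson1.html">
      <file href="lesson1.html"/>
    </resource>
  </resources>
</manifest>
```

Because the manifest already carries this item/resource structure, it is a natural attachment point for the kind of external tags our specification proposes, without modifying the package itself.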