IOS Press, 2003

Evaluation of OntoLearn, a methodology for automatic learning of domain ontologies

Paola VELARDI a, Roberto NAVIGLI a, Alessandro CUCCHIARELLI b and Francesca NERI b
a Dipartimento di Informatica - Università di Roma "La Sapienza" - Roma, Italy
b DIIGA - Università Politecnica delle Marche - Ancona, Italy

Abstract. Ontology evaluation is a critical task, even more so when the ontology is the output of an automatic system, rather than the result of a conceptualization effort produced by a team of domain specialists and knowledge engineers. This paper provides an evaluation of the OntoLearn ontology learning system. The proposed evaluation strategy is twofold: first, we provide a detailed quantitative analysis of the ontology learning algorithms; second, we automatically generate natural language descriptions of formal concept specifications in order to facilitate per-concept qualitative analysis by domain specialists.

Keywords. Ontology learning, natural language processing

1. Introduction

Ontologies play an important role in the so-called Semantic Web project [1]. Their aim is to capture domain knowledge in a particular area of interest, favoring interoperability and providing a shared understanding among the players involved in web-based applications (e.g. web services, resource sharing among enterprises and, in general, web information access). In recent years, research on ontology development has produced tangible results concerning the definition of language standards [2] and increasingly powerful ontology editing and management tools [3][4]. Despite the availability of these tools, populating domain ontologies with a sufficiently large number of concepts remains a tedious and time-consuming process, preventing wide-scale production and usage of ontologies by industrial institutions. Automatic methods for ontology learning and population have been proposed in the recent literature (e.g.
the ECAI-2002 [5] and KCAP-2003 [6] workshops, and [7]), but a related issue then becomes the evaluation of such automatically generated ontologies, not only for comparing the different approaches, but also to verify whether an automatic process can actually compete with the typically human process of converging on an agreed conceptualization of a given domain. Ontology construction, apart from the technical aspects of a knowledge representation task (i.e. the choice of representation languages, consistency and correctness with respect to axioms, etc.), is a consensus-building process, one that implies long and often tedious discussions among the specialists of any given domain. Can an automatic method simulate this process?