On the Notion of Semantic Metric Spaces for Object and Aspect Oriented Software Design

Epaminondas Kapetanios and Sue Black
University of Westminster, London, UK
{e.kapetanios, blacksu}@wmin.ac.uk
http://cisse.wmin.ac.uk

Abstract. Quality assurance via metrics and quality models in conceptual modelling and design for software and for data or knowledge bases has always been of major concern to software, systems and database developers. Given the inherent difficulty of defining design metrics that are as objective as possible, this paper discusses a theoretical framework for software design metrics which conceives metric parameters as dimensions of a metric space. In doing so, it provides a bridge between similarity-measurement techniques from the field of information retrieval and software design. The introduced metric space is conceived as a vector space, which enables comparisons among proposed software development alternatives. This is illustrated in terms of metric parameters for aspect-oriented software system design and its impact on the object-oriented counterpart. The theoretical framework and discussion could, however, also serve as a design quality metrics framework for alternative conceptualizations as applied to object-oriented software design and its persistent storage counterpart via object-oriented databases.

1 Introduction

Software measurement has been around for some forty years [Zus98]. It has been used to gauge software and software engineering quality in terms of the products, processes and resources involved. Because we do not always know what causes a project or its software to be of poor quality, it is essential that we record and examine trends and characteristics via measurement. Early measurement of software focused almost entirely on source code, the simplest measure being lines of code (LOC).
In 1983, Basili and Hutchens [BH83] suggested that LOC be used as a baseline, or benchmark, against which all other measures should be compared; that is, an effective metric should perform better than LOC, so LOC should serve as a "null hypothesis" for empirical evaluation. Much empirical work has shown LOC to correlate with other metrics [She93], most notably with McCabe's cyclomatic complexity. The earliest code metric based on a coherent model of software complexity was Halstead's software science []. Early empirical evaluations produced high correlations between predicted and actual results, but later work showed a lower correlation. Bowen [Bow78] found only modest correlation, with software science being outperformed by LOC.
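The abstract's central idea, treating metric parameters as dimensions of a vector space so that design alternatives can be compared by a similarity measure borrowed from information retrieval, can be sketched briefly. The metric names and values below are purely illustrative assumptions, not parameters taken from the paper; cosine similarity is used here as one standard information-retrieval similarity measure.

```python
import math

# Hypothetical metric parameters treated as dimensions of a vector space;
# the names and values are illustrative only.
METRICS = ["loc", "cyclomatic_complexity", "coupling", "cohesion"]

# Two candidate designs (say, an object-oriented design and an
# aspect-oriented refactoring), each described by the same ordered
# vector of metric values.
design_oo = [1200.0, 35.0, 18.0, 0.6]
design_ao = [1100.0, 28.0, 9.0, 0.8]

def cosine_similarity(u, v):
    """Cosine of the angle between two metric vectors.

    A value of 1.0 means the two designs are proportionally identical
    along every metric dimension; smaller values indicate divergence.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(round(cosine_similarity(design_oo, design_ao), 4))
```

Other distance functions over the same vectors (e.g. Euclidean distance) would work equally well; the point is only that once designs are embedded in a common metric space, any such measure yields a quantitative comparison of alternatives.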