QuOnt: an ontology for the reuse of quality criteria

Remco C. de Boer and Hans van Vliet
Department of Computer Science, VU University Amsterdam, the Netherlands
{remco,hans}@cs.vu.nl

Abstract

Software product audits are knowledge-intensive tasks in which architectural knowledge plays a pivotal role. In the input stage of a software product audit, quality criteria are selected to which the software product should conform. These quality criteria resemble architectural tactics and can be viewed as a definition of the Soll-architecture of the product. Like tactics, the same quality criteria can be applied to different software products. However, there are currently no models that support the codification of quality criteria as reusable assets. In this work, we present an ontology that supports the reuse of quality criteria in the input stage of software product audits.

1. Introduction

On occasion, organizations may experience the need to verify the quality of a software product. Such a need may arise, for example, prior to acquisition or in the case of contracted-out development. A way to assess the quality of a software product is to let an independent party perform an audit.

Software product audits are knowledge-intensive tasks in which architectural knowledge plays a pivotal role. For example, architectural design decisions and their rationale provide insight into the trade-offs that were considered, the forces that influenced the decisions, and the constraints that were in place.

Knowledge work, according to Mackenzie Owen, incorporates “the gathering, processing, creating, sharing and disseminating of knowledge” and consists of three distinct stages: input, throughput, and output [13].
In each of these stages, knowledge is employed: in the input stage, relevant existing knowledge and data are gathered; in the throughput stage, knowledge and data are analyzed and processed; and in the output stage, the results of the previous stage are recorded and disseminated.

A typical software product audit consists of the following activities:

• Input stage: gather quality attributes, quality criteria, and product artifacts.
• Throughput stage: compare the product artifacts with the desired level of quality, expressed in terms of quality attributes and quality criteria.
• Output stage: lay down any findings regarding deviations from the desired level of quality in a report.

We can distinguish three stakeholders in a software product audit: the customer (i.e., the party who requested the audit), the supplier (i.e., the party who developed the software product), and the auditor (i.e., the party who independently assesses whether the supplier’s software product conforms to the customer’s needs).

In the input stage, the product artifacts are obtained directly from the supplier. The important quality attributes are usually determined in a workshop with the customer. The result of such a workshop could, for example, be that security, maintainability, and usability are the three most important quality attributes to a customer, and that of those three security has the highest and usability the lowest priority. From the quality attributes and their priorities, quality criteria are derived that the product should satisfy. Unlike quality attributes, which are still a fairly abstract representation of ‘quality’, quality criteria represent concrete measures that may or may not be found in the software product. For example, in a system where security is important, proper user authentication would probably be a quality criterion; without such authentication, the necessary level of security is unlikely to be reached.
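The relationship between prioritized quality attributes and the concrete criteria derived from them can be illustrated with a minimal sketch. This is not the ontology presented in the paper; all class and field names are hypothetical, and the sketch only encodes the workshop example above (security highest priority, usability lowest, with user authentication as a derived security criterion).

```python
# Hypothetical sketch (not the QuOnt ontology): prioritized quality
# attributes and the concrete quality criteria derived from them.
from dataclasses import dataclass, field


@dataclass
class QualityCriterion:
    """A concrete, checkable measure, e.g. 'proper user authentication'."""
    description: str


@dataclass
class QualityAttribute:
    """An abstract notion of quality, e.g. 'security', with a priority
    (lower number = higher priority) and its derived criteria."""
    name: str
    priority: int
    criteria: list = field(default_factory=list)


# Workshop outcome from the example: security ranks highest, usability lowest.
attributes = [
    QualityAttribute("security", priority=1,
                     criteria=[QualityCriterion("proper user authentication")]),
    QualityAttribute("maintainability", priority=2),
    QualityAttribute("usability", priority=3),
]

# The input stage of the audit collects the criteria the product should
# satisfy, ordered by the priority of the attribute they were derived from.
selected = [c.description
            for attr in sorted(attributes, key=lambda a: a.priority)
            for c in attr.criteria]
print(selected)  # ['proper user authentication']
```

In a real audit, each attribute would typically yield several criteria, and the same criterion could be derived from more than one attribute; a reusable codification must capture exactly such cross-links, which is what motivates an ontology rather than a flat list.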
Although the derivation of quality criteria from quality attributes again involves deliberation with the customer, it requires a fair amount of technical background knowledge. Hence, auditors generally take the lead in deciding which quality criteria to use.

SHARK’09, May 16, 2009, Vancouver, Canada. 978-1-4244-3726-9/09/$25.00 2009 IEEE. ICSE’09 Workshop.