A User Interface for the Construction of 3D Active Shape Models

A. D. Brett, M. F. Wilkins and C. J. Taylor
Department of Medical Biophysics, Stopford Building, Oxford Road, University of Manchester, Manchester M13 9PT, UK
{adb,mfw,ctaylor}@sv1.smb.man.ac.uk

Abstract. We present a graphical user interface which facilitates the definition of anatomical boundaries and landmarks for automated model-based segmentation and analysis using 3D Active Shape Models (ASMs). ASMs provide a statistically-based approach to automatic segmentation which is completely general in terms of both imaging modalities and the anatomical objects of interest. Further, the ASM guarantees an anatomically plausible result by incorporating a priori knowledge of the objects of interest. This approach is therefore well suited to the interpretation of medical images.

1 Introduction

The segmentation of 3D diagnostic images is important in quantifying and visualising three-dimensional (3D) anatomical structures. Purely manual segmentation techniques involving the delineation of structural boundaries are both time-consuming and prone to subjective errors. Data-driven segmentation techniques usually require at least some user interaction when anatomical knowledge or the user requirement overrides what is present in the image. This is particularly true of imaging modalities in which the data is noisy and structural boundaries are incomplete.

We attempt to overcome these problems by using a model-based segmentation technique. This model-based approach uses a kind of flexible template which is allowed to deform to fit grey-scale evidence found in the image. A variety of flexible template methods have been described for use in model-based image segmentation. Templates may be built from sets of primitives such as circles, lines or arcs, each of which has some degree of freedom to move relative to the others.
However, these models are not general - a new template and fitting scheme must be produced for each application. Models have also been described which are based upon flexible contours such as the ‘snakes’ of Kass et al [7]. These flexible contours are energy-minimising spline curves which have associated stiffness and elasticity and are attracted towards grey-level image evidence such as edges. The drawback of such models is that they are free to take almost any smooth shape and are therefore non-specific. That is, they can produce examples of the object of interest which are outside the normal variation of shape for that object. To make a model specific, the variation in shape that the model may describe must be constrained by incorporating a priori knowledge of the object to be segmented. This problem has led to the development of Active Shape Models (ASMs) [1], which are flexible templates incorporating shape constraints.

The work described here is an attempt to solve the problems of building 3D ASMs with arbitrary topologies in an intuitive and flexible way. The goal is to produce a system which may be used routinely by non-technical users.

2 Active Shape Models

ASMs combine explicit models of shape and grey-level appearance of a given class of objects. These models are built using a training set of images which contain examples of the objects. The shape of a set of structures of interest may be described by a labelled set of closely spaced landmark points. At present, these landmarks are manually labelled, but this may be done automatically once a boundary has been defined [5]. The shape variation of a class of objects may be described by a Point Distribution Model (PDM), which is generated by performing principal component analysis on the variation in position of these landmark points over all the training examples.
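The PDM construction described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the landmark sets have already been aligned (e.g. by a Procrustes fit) and flattened into vectors, and the function and variable names are hypothetical.

```python
import numpy as np

def build_pdm(shapes):
    """Build a Point Distribution Model from aligned landmark vectors.

    shapes: (n_examples, n_points * n_dims) array, one row per training shape.
    Returns the mean shape, the modes of variation (eigenvectors of the
    covariance matrix, sorted by decreasing variance), and their variances.
    """
    mean_shape = shapes.mean(axis=0)
    cov = np.cov(shapes - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return mean_shape, eigvecs[:, order], eigvals[order]

def generate_shape(mean_shape, modes, b):
    """New example from the mean shape plus a weighted sum of modes."""
    t = len(b)                                 # number of modes retained
    return mean_shape + modes[:, :t] @ b

# Toy usage: ten training shapes, each four 2D landmarks (8 values).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 8))
mean_shape, modes, variances = build_pdm(shapes)
new_shape = generate_shape(mean_shape, modes, np.array([0.5, -0.2]))
```

In practice the number of retained modes is chosen so that they explain most of the total variance (i.e. most of the sum of the eigenvalues), which keeps the model compact while still spanning the training-set variation.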
A PDM describes shape variation as a mean shape and a set of linearly independent modes which represent the main ways in which the shape of the training examples varied. New shape examples, x, are generated by a linear superposition of the mean shape, x̄, and a weighted subset of these modes:

    x = x̄ + P b    (1)

where P is a matrix describing a subset of modes, and b is a vector of weights controlling the influence of each mode.

The grey-level appearance of the objects across a training set is modelled by examining the pixel values in small patches around the landmark points. Again, a statistical analysis of these patches produces an appearance