MACHINE-LEARNED CONTOURS TO ASSIST BOUNDARY TRACING TASKS

Stewart Crawford-Hines & Charles Anderson
Department of Computer Science
Colorado State University, Fort Collins, CO 80523
{sgcraw,anderson}@cs.colostate.edu

Published in the Proceedings of the 1998 IEEE Southwest Symposium on Image Analysis and Interpretation, Tucson AZ, pp. 229-231, April 1998.

ABSTRACT

Our focus is to assist interactively in the initial segmentation of medical imagery. In near-real-time, from an initial set of traced pixels, our system learns the characteristics of the contour being traced and projects the trace ahead. This paper provides an overview of our approach, presents promising results, and outlines our research directions.

I. INTRODUCTION

Despite decades of research in edge detection, the current state of the practice for delineating regions of interest in medical imagery is an expert's manual outlining of the region. Current techniques often yield too many unconnected edges, which still require manual filtering, and this only worsens in the presence of noise and texture. Johnson et al. [3] note:

“Although image segmentation and contour/edge detections have been investigated for quite a long time, there is still no algorithm that can automatically find region boundaries perfectly from clinically obtained medical images. There are two reasons for this. One is that most of the image segmentation algorithms are still noise sensitive. The second reason is that most segmentation tasks require certain background knowledge about the region(s) of interest.”

We are working on a new approach that provides real-time learning and trace-ahead capabilities to assist experts in these tasks. The combination we set forward capitalizes on what each party does best: a human expert provides global perspective and context, while a software system quickly analyzes and works through similar local neighborhoods.

Figure 1: Enlargement: network-traced path through the grey-scale landscape.

II. LEARNING & PROJECTING A TRACE

A model of our interaction scenario is illustrated on an enlarged set of pixels in Figure 1. The darkest pixels represent a contour 120 pixels long. The first 20 pixels on the left were traced manually by moving a cursor over the image.
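The learn-and-project interaction can be sketched in code. The following is a minimal illustrative stand-in, not the paper's method: where the paper trains a neural network on local pixel neighborhoods, this sketch "learns" only the mean grey level of the manually traced pixels and then extends the trace greedily, one column at a time, by picking the neighboring pixel whose intensity best matches that learned value. The synthetic image, the 120-pixel contour, and all function names here are hypothetical, chosen to mirror the Figure 1 scenario.

```python
# Illustrative sketch only: a greedy intensity-matching trace-ahead,
# standing in for the paper's neural-network contour model.

def make_image(width, height, path_row, contour_val=20, background_val=200):
    """Synthetic grey-scale image: one dark contour pixel per column."""
    img = [[background_val] * width for _ in range(height)]
    for col in range(width):
        img[path_row(col)][col] = contour_val
    return img

def learn_contour_intensity(img, traced):
    """'Learn' the contour characteristic from the manual trace:
    here, simply the mean grey level along the traced pixels."""
    vals = [img[r][c] for r, c in traced]
    return sum(vals) / len(vals)

def project_trace(img, traced, n_steps, target):
    """Greedy trace-ahead: from the last traced pixel, step one column
    right at a time, choosing among rows (r-1, r, r+1) the pixel whose
    intensity is closest to the learned target value."""
    trace = list(traced)
    for _ in range(n_steps):
        r, c = trace[-1]
        candidates = [(rr, c + 1) for rr in (r - 1, r, r + 1)
                      if 0 <= rr < len(img) and c + 1 < len(img[0])]
        if not candidates:
            break
        best = min(candidates, key=lambda p: abs(img[p[0]][p[1]] - target))
        trace.append(best)
    return trace

def path(col):
    """Hypothetical contour row: a triangle wave changing by one row per column."""
    d = col % 10
    return 20 + (d if d < 5 else 10 - d)

# Usage: a 120-pixel contour; the first 20 pixels are traced "manually".
width, height = 120, 40
img = make_image(width, height, path)
manual = [(path(c), c) for c in range(20)]
target = learn_contour_intensity(img, manual)
full = project_trace(img, manual, n_steps=100, target=target)
```

Because the synthetic contour moves by at most one row per column, the greedy step always finds the single dark pixel among its three candidates; real medical imagery, with noise and texture, is precisely where this naive heuristic breaks down and a learned neighborhood model is needed.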