Interaction with Medical Volume Data on a Projection Workbench

Ching-yao Lin 1, R. Bowen Loftin 2, Ioannis A. Kakadiaris 1, David T. Chen 1, Simon Su 1
1 Department of Computer Science, University of Houston, Houston, Texas 77204
2 Virginia Modeling, Analysis & Simulation Center, Old Dominion University, Suffolk, Virginia 23435
{chingyao, ioannisk, ssu}@cs.uh.edu, bloftin@odu.edu, dave@chen.net

Abstract

Interaction with volume data has often been difficult due to the large memory and processing power required. By taking advantage of current high-end graphics hardware, we have developed a volumetric virtual environment that allows a user to interact with a volumetric Visible Human data set. The application enables the user to explore the interior of a virtual human body in a natural and intuitive way.

1. Introduction

In traditional data visualization, researchers visualize data on a two-dimensional screen and use a mouse and a keyboard to interact with the data. More recently, virtual reality (VR) techniques have allowed users to manipulate data naturally and intuitively in real time. VR techniques have been applied to many areas of scientific visualization as well as training; one of the major application areas is medicine. VR provides an intuitive way to visualize complex medical data and can be used for medical education, surgical planning, and training. In VR applications, the data are displayed in stereo, which allows users to better understand the spatial relationships between objects in the environment. In addition, the latest generation of hardware supports interaction at real-time rates. Traditional medical VR applications rendered scenes via surface graphics. Users build anatomical models with modeling software, or extract surface information from volumetric data, such as magnetic resonance imaging (MRI) or computed tomography (CT) volumes, using methods like the Marching Cubes algorithm [1].
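The core step of the Marching Cubes algorithm mentioned above can be illustrated in a few lines. This is only a sketch of the algorithm's first stage, not the paper's implementation: each voxel cell's eight corner densities are classified against an isovalue to form the 8-bit "case index" that selects a triangulation from the algorithm's lookup table. The corner ordering and sample densities below are illustrative assumptions.

```python
def cube_case_index(corner_values, isovalue):
    """Return the Marching Cubes case index (0-255) for one voxel cell.

    corner_values: densities at the cell's 8 corners, in a fixed order.
    Bit i of the result is set when corner i lies inside the isosurface.
    """
    index = 0
    for i, value in enumerate(corner_values):
        if value < isovalue:      # corner is inside the surface
            index |= 1 << i
    return index

# Hypothetical cell straddling a CT bone threshold of 100:
densities = [30, 30, 200, 210, 25, 40, 190, 220]
print(cube_case_index(densities, isovalue=100))  # -> 51 (bits 0, 1, 4, 5 set)
```

A case index of 0 or 255 means all corners lie on the same side of the isosurface, so the cell produces no triangles; the remaining 254 cases are resolved via the precomputed edge table that the full algorithm provides.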
However, the heterogeneous inner structure of the human body cannot be displayed with surface graphics. Traditionally, users have studied their MRI or CT data as series of parallel slices, although the data are by nature volumetric. The interior details of the human body can instead be presented using volume graphics. In the past, real-time interaction with volume data was not practical due to the extensive computational power required. With the advent of fast graphics acceleration hardware, we are now able to create an interactive volumetric virtual environment.

Three-dimensional interaction is a more natural and intuitive way to manipulate data. People can “feel” the position and movement of their hands without looking at them. To perform a task, a user’s perceptual system needs something to refer to, something to experience; three-dimensional interaction uses a spatial reference to provide that perceptual experience [2]. Therefore, compared to a traditional keyboard-and-mouse interface, three-dimensional interaction provides an easier way to locate targets in a three-dimensional environment. For example, to select a clipping plane at an arbitrary angle in a three-dimensional environment, we can place the plane at the desired location simply by moving a hand. In contrast, with a mouse and a keyboard, we must adjust the plane’s orientation in a slow and cumbersome manner.

Our goal is to develop an application that can visualize volumetric medical data [2] in a virtual environment at interactive rates, allowing users to explore the interior of the human body. The application is intended for use in surgical training and planning.

2. Related Work

2.1 Projects based on the Visible Human Data Set

The Visible Human Project® [3] is a long-range plan of the National Library of Medicine (NLM) to provide data that serve as a common reference point for the study of human anatomy.
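The clipping-plane example above can be sketched in code. This is a minimal illustration under assumed conventions, not the paper's implementation: we suppose a tracker reports the hand's position and a normal direction, derive the plane equation from that pose, and test which sample points fall in the discarded half-space. All names and values here are hypothetical.

```python
def plane_from_hand(position, normal):
    """Plane n.x + d = 0 through the hand position, oriented by its normal."""
    nx, ny, nz = normal
    px, py, pz = position
    d = -(nx * px + ny * py + nz * pz)
    return (nx, ny, nz, d)

def is_clipped(point, plane):
    """True when the point lies on the positive side (the discarded half-space)."""
    nx, ny, nz, d = plane
    x, y, z = point
    return nx * x + ny * y + nz * z + d > 0.0

# Hand held at (0, 0, 5) with its normal pointing along +z:
plane = plane_from_hand((0.0, 0.0, 5.0), (0.0, 0.0, 1.0))
print(is_clipped((0.0, 0.0, 7.0), plane))  # True: point in front of the hand
print(is_clipped((0.0, 0.0, 3.0), plane))  # False: point behind the plane
```

With a tracked hand, updating the plane is a single pose read per frame; with a mouse and keyboard, the same placement requires separately adjusting two orientation angles and a position, which is the cumbersome process described above.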
NLM has created complete, anatomically detailed three-dimensional representations of the normal male and female body. The data were obtained using CT, MRI, and digitized photographic images from cryosections. The male was sectioned at 1-millimeter intervals, while the female was sectioned at one-third-millimeter intervals [4]. Many applications and products have been built on the Visible Human data set [5]. Most of these applications render two-dimensional images directly or reconstruct new cross-section images from the original data set [6, 7].