Exploring the Visible Human using the VOXEL-MAN framework

T. Schiemann, J. Freudenberg, B. Pflesser, A. Pommert, K. Priesmeyer, M. Riemer, R. Schubert, U. Tiede, K.H. Höhne *

Institute of Mathematics and Computer Science in Medicine (IMDM), University Hospital Hamburg-Eppendorf, Pav 70 Martinistrasse 5, 20246 Hamburg, Germany (1)

Received 6 December 1999

Abstract

In principle, the Visible Human data sets are an ideal basis for building electronic atlases. While it is easy to construct such atlases by merely offering the possibility of browsing through the 2D slices, constructing realistic 3D models is a huge project. As one comparatively easy way to establish 3D use, we have registered the Visible Human data to the already existing 3D atlas VOXEL-MAN/brain. This procedure enables one to look up anatomical detail in an atlas based on radiological images. Concerning the segmentation problem, which is the prerequisite for a true 3D atlas, we have developed an interactive classification method that delivers realistic perspective views of the Visible Human. As these volume-based methods require high-end workstations, we have finally developed a multimedia program that runs on standard PCs and uses QuickTime VR movies. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Visible Human data sets; VOXEL-MAN; Electronic anatomy atlases

1. Introduction

In our previous work we developed a framework for the generation of volume-based interactive 3D atlases. These atlases are based on a two-layer model [1]: the spatial part of the model is realized by image volumes, which are usually obtained from radiology, and congruent label volumes for different domains of knowledge such as morphology or functional anatomy. These volumes are linked to a semantic network containing descriptive knowledge about the objects. For extracting the model's contents, a large set of visualization, exploration, and simulation tools is available.
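The two-layer model described above can be sketched in miniature: an image volume carries the gray or color values, a congruent label volume of identical dimensions assigns each voxel to an object, and each label indexes descriptive knowledge. All names and values below are hypothetical illustrations; the actual VOXEL-MAN model uses a full semantic network with typed relations, not a flat dictionary.

```python
import numpy as np

# Hypothetical miniature volumes: an image layer and a congruent
# label layer with the same dimensions (here a tiny 4x4x4 grid).
image = np.zeros((4, 4, 4), dtype=np.uint8)
labels = np.zeros((4, 4, 4), dtype=np.uint8)

# Mark the voxels of one example structure with label 1.
labels[1:3, 1:3, 1:3] = 1
image[1:3, 1:3, 1:3] = 200

# Descriptive knowledge keyed by label; in VOXEL-MAN this is a
# semantic network spanning domains such as morphology and
# functional anatomy.
semantic_net = {
    0: {"name": "background"},
    1: {"name": "example structure", "domain": "morphology"},
}

def describe_voxel(x, y, z):
    """Look up the descriptive knowledge attached to a voxel."""
    return semantic_net[labels[x, y, z]]["name"]

print(describe_voxel(2, 2, 2))  # -> example structure
```

The key property is congruence: because both volumes share one coordinate system, picking a point in a rendered image immediately yields the label and, through it, the attached knowledge.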
With this framework, the VOXEL-MAN atlases of the brain and skull have been completed [2], and atlases of other parts of the body are under development [3]. Their pictorial content is based on radiological cross-sectional images. While this has the advantage that anatomy can be well linked to radiological imaging, the anatomical detail leaves room for improvement. The high-resolution data sets of the Visible Human project [4] are therefore an ideal basis for this purpose [5].

This paper describes three approaches for using the Visible Human within VOXEL-MAN:

1. The Visible Human slices are registered with the existing atlas VOXEL-MAN/brain, which is based on MRI. Hence, both stacks of slices can be browsed through in the same context.

2. The Visible Human is segmented for the creation of new atlases based directly on the color volume data. Images of high quality and greatly improved realism can thus be computed. Due to the large amount of data, these methods require high-end workstations.

3. A multimedia program for PCs is developed that is based on QuickTime VR movies with additional intelligent layers. This method transfers many of the exploration tools from the high-end program to small computers, realizing "quasi-virtual reality".

While the first approach is independent of the others, the third method requires the second as an authoring tool for the computation of intelligent movies.

Computerized Medical Imaging and Graphics 24 (2000) 127–132

* Corresponding author. Tel.: +49-40-42803-3652; fax: +49-40-42803-4882. E-mail addresses: schiemann@uke.uni-hamburg.de (T. Schiemann), freudenberg@uke.uni-hamburg.de (J. Freudenberg), pflesser@uke.uni-hamburg.de (B. Pflesser), pommert@uke.uni-hamburg.de (A. Pommert), priesmeyer@uke.uni-hamburg.de (K. Priesmeyer), riemer@uke.uni-hamburg.de (M. Riemer), schubert@uke.uni-hamburg.de (R. Schubert), tiede@uke.uni-hamburg.de (U. Tiede), hoehne@uke.uni-hamburg.de (K.H. Höhne).

(1) http://www.uni-hamburg.de/idv
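The registration of approach 1, mapping the Visible Human slices into the coordinate system of the MRI-based VOXEL-MAN/brain atlas, amounts to applying a spatial transform to voxel coordinates so that a point picked in one data set can be looked up in the other. The following is a minimal sketch assuming an affine transform in homogeneous coordinates; the transform values are invented for illustration, and the paper's actual registration parameters are not reproduced here.

```python
import numpy as np

# Hypothetical 4x4 homogeneous transform mapping Visible Human voxel
# coordinates to atlas (MRI) coordinates; a real transform would be
# estimated from corresponding anatomical landmarks in both data sets.
T_vh_to_atlas = np.array([
    [1.0, 0.0, 0.0,  5.0],   # shift of +5 voxels along x
    [0.0, 1.0, 0.0, -3.0],   # shift of -3 voxels along y
    [0.0, 0.0, 1.0,  0.0],
    [0.0, 0.0, 0.0,  1.0],
])

def to_atlas(p_vh):
    """Map a Visible Human position to the atlas coordinate system."""
    p = np.append(np.asarray(p_vh, dtype=float), 1.0)  # homogeneous
    return (T_vh_to_atlas @ p)[:3]

def to_visible_human(p_atlas):
    """Inverse mapping, so both slice stacks can be browsed in the
    same context from either side."""
    p = np.append(np.asarray(p_atlas, dtype=float), 1.0)
    return (np.linalg.inv(T_vh_to_atlas) @ p)[:3]
```

Because the transform is invertible, a position selected while browsing the MRI-based atlas can be mapped back into the Visible Human stack and vice versa, which is exactly the linked-browsing behavior described for approach 1.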