Shape Retaining Chain Linked Model for Real-time Volume Haptic Rendering

Jinah Park, KAIST, jinah@kaist.ac.kr (updated: jinah@icu.ac.kr)
Sang-Youn Kim, KAIST, museful@kaist.ac.kr
Seung-Woo Son, KAIST, sons@kaist.ac.kr
Dong-Soo Kwon, KAIST, kwonds@mail.kaist.ac.kr

Abstract

Haptic rendering is the process of computing and generating forces in response to user interactions with virtual objects. While real-time volume rendering is now common for visualization, manipulation is still largely limited to surface models because of the overwhelming computational requirements of volumetric models. In this paper, we propose a new volumetric deformable model that is suitable for volume haptic interaction. The volume elements of the proposed model are linked to their nearest neighbors, and their displacements are transformed into potential energy of the virtual object. The original 3D ChainMail algorithm does not account for the fact that the residual energy left in the object after an interaction becomes a critical problem in haptic rendering. We present the shape-retaining chain linked model, which allows fast and realistic deformation of elastic objects. Furthermore, we incorporate force-voltage analogy (duality) concepts into the proposed shape-retaining chain linked representation in order to develop a fast volumetric haptic model suitable for real-time applications. We experimented with homogeneous and non-homogeneous virtual objects of 75×75×75 volume elements, and we verified real-time, realistic haptic interaction with a 3-DOF PHANToM™ haptic device.

Keywords: volume deformation and manipulation, voxel-based representation, virtual reality, haptic model

1 Introduction

Nowadays, three-dimensional volumetric data sets of human bodies are easily obtainable from medical image scanners. To bring these volumetric data sets into a virtual environment, we need not only visualization schemes but also manipulation schemes.
For visualization, two methods are usually considered in medical imaging: surface rendering and volume rendering. Surface rendering requires a preprocessing step to extract the surfaces of the objects of interest. Once this task is done, the rendering itself is fast enough for interactive viewing on a common graphics platform. Although the rendered image is crisp, it may still lack important details and does not reflect the internal data. Volume-rendered images, on the other hand, convey more information, accounting for all the volumetric data at the cost of computation time. Until the late 1990s, limited computing power made surface rendering the popular choice. The recent evolution of computer graphics technology and hardware, however, has made real-time volume rendering of volumetric data sets possible.

[Preprint; to appear in the Proceedings of the IEEE/SIGGRAPH Symposium on Volume Visualization and Graphics, Boston, MA, October 2002. Note: updated email address for J. Park.]

For manipulation of virtual objects, we first need to define a representation, or model, that unifies a portion of the data as an entity. We refer to manipulation as an act that causes a subset or the entire set of the unified data to move spatially. If the entity is a soft object, manipulating it will deform the object's original shape. Although the appearance of the shape is only at the surface level, it is desirable that the deformed shape reflect the underlying internal composition (i.e., the model should be at the volumetric level). There are two common types of deformable models: mass-spring models and models based on the finite element method (FEM). Unfortunately, in practice, these physically motivated deformable models are largely limited to surface modeling, mainly due to their overwhelming computational requirements.
For simulating interaction with deformable objects, haptic rendering [1,2] has been recognized as being as important as graphic rendering in virtual reality. In a haptic simulation of interaction with a deformable object, for example in a surgical simulator, it is even harder to meet the real-time constraint, because the virtual object model requires a haptic update rate of 1 kHz (as opposed to the graphics update rate of about 30 Hz). Hence, an alternative modeling method is needed to meet the computational requirements of haptic applications.

Gibson [3,4] proposed a voxel-based representation for volume deformation and suggested the 3D ChainMail algorithm, which overcomes the computational burden by rapidly propagating deformation outward through a volume. The deformation computation is entirely local, involving only comparisons between two neighboring nodes and no heavy matrix inversion. The algorithm therefore makes fast deformation of objects containing thousands of volume elements possible. Although the elegantly simple nature of the algorithm seems well suited to haptic rendering, where computational efficiency is most crucial, haptic models based on the 3D ChainMail representation have yet to be presented [5]. This paper presents a new haptic model based on such a representation.

We have developed a real-time volume haptic rendering technique based on a modified 3D ChainMail. The behavior of the original 3D ChainMail algorithm [3] resembles that of 'play-dough': it is hard to reshape the deformed model back to its original state, and the residual energy left in the object after an interaction becomes a critical problem in haptic rendering. We propose a new algorithm, shape-retaining 3D ChainMail, to resolve this problem. The new algorithm is as fast as the original 3D ChainMail, yet displays more realistic deformation of elastic materials.
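To illustrate the style of local constraint propagation that 3D ChainMail relies on, here is a minimal 1D sketch. The real algorithm operates on a 3D lattice with separate bounds along each axis; the function name and the distance limits below are illustrative assumptions, not the authors' code.

```python
def chainmail_1d(positions, grabbed, new_pos, min_d=0.5, max_d=1.5):
    """Move element `grabbed` to `new_pos`, then propagate the disturbance
    outward so that every neighboring pair stays within [min_d, max_d]."""
    p = list(positions)
    p[grabbed] = new_pos
    # Propagate to the right: each element is compared only with its
    # immediate neighbor, so no global system of equations is solved.
    for i in range(grabbed + 1, len(p)):
        d = p[i] - p[i - 1]
        if d < min_d:
            p[i] = p[i - 1] + min_d    # compressed past the limit: push
        elif d > max_d:
            p[i] = p[i - 1] + max_d    # stretched past the limit: pull
        else:
            break                      # constraint satisfied: stop early
    # Propagate to the left (mirror image of the loop above).
    for i in range(grabbed - 1, -1, -1):
        d = p[i + 1] - p[i]
        if d < min_d:
            p[i] = p[i + 1] - min_d
        elif d > max_d:
            p[i] = p[i + 1] - max_d
        else:
            break
    return p
```

The early `break` is what keeps the cost proportional to the size of the disturbed region rather than to the whole volume, which is the property the paper exploits for haptic rates.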
We call our representation the shape-retaining chain linked model, or S-chain model for short. Furthermore, we introduce force-voltage analogy concepts to compute the reflected forces used in volume haptic rendering. In our representation, the reflected force from the deformed object is proportional to the sum of the distances of all dislocated volume elements. This haptic rendering method has been implemented with a PHANToM™ haptic interface, and its performance on 75×75×75 volumetric data sets is shown to be stable and realistic.
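The force computation just described can be sketched as follows. The paper states only that the reflected force is proportional to the summed displacement of the dislocated elements, so the gain `stiffness` and the function name below are hypothetical.

```python
import math

def reflected_force(rest_positions, current_positions, stiffness=0.1):
    """Reflected force magnitude proportional to the summed Euclidean
    displacement of all dislocated volume elements.
    `stiffness` is a hypothetical gain, not a value from the paper."""
    total = 0.0
    for rest, cur in zip(rest_positions, current_positions):
        total += math.dist(rest, cur)  # displacement of one element
    return stiffness * total
```

Because each element's displacement is already known after the propagation pass, this sum costs one extra traversal of the disturbed region and fits within a 1 kHz haptic loop.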