Real-time Lip Tracking for Virtual Lip Implementation in Virtual Environments and Computer Games

Zhang Jian, Mustafa Nazmi Kaynak, Adrian David Cheok, Ko Chi Chung
National University of Singapore, Singapore

Abstract

The use of soft computing for virtual object implementation is a novel technique that has received a lot of attention in the last few years. This field is especially useful for applications related to computer games and virtual environments. One such application area is the use of real-time lip tracking information for the implementation of virtual lips in animated characters. The increased computing power of computers has made real-time lip tracking possible. This real-time lip tracking information can be used to implement and control a virtual lip. Moreover, using soft computing to represent the real-time lip parameters yields a more robust and flexible system that can compensate for the potential errors of lip tracking. In this paper we present a system which generates real-time lip information for soft-computing-based virtual animated lips.

1 Introduction

Extensive use of the internet and advances in computational power and virtual reality have enabled researchers to develop different virtual reality systems in recent years. Some of these systems have been used to simulate human movements, such as human motion capture [1] and virtual character positioning and balance control [2]. On the other hand, different lip tracking algorithms have been developed, and lip information is used in different areas such as audio-visual speech recognition [3]. However, the use of lip tracking for implementing and controlling a virtual lip is a novel application. In our research, we aimed to use real-time lip tracking information for virtual lip implementation. This new area has many potential applications, including computer games and virtual environments.
The system for this application has two separate parts: real-time lip tracking and virtual lip implementation. The block diagram of the overall system is shown in Fig. 1. In the lip tracking part, the lip is first automatically tracked, and then the relevant lip parameters, necessary for the representation of the lip, are calculated. After obtaining the lip parameters, these parameters are used to implement and control a virtual lip based on a parametric lip model associated with the features extracted from the lip.

[Figure 1. Main System Flowchart: Active Contour → Lip Feature Fuzzification → Virtual Lip Implementation]

In this application, the performance of the virtual lip implementation is directly related to the accuracy of the lip tracking. If the lip tracker can accurately track the lip, then the virtual lip implementation will be more realistic and successful. In order to have a real-time and robust system, the lip feature extraction should be fast and invariant to skin color, head movements, and different lighting conditions. But even if the lip tracking system is robust, there may still be errors in lip tracking due to the low image quality of the web camera and illumination. In order to handle these potential problems for virtual lip implementation, we fuzzify the extracted lip parameters. This fuzzification process makes the virtual lip implementation more flexible and capable of handling the potential errors: instead of defining a crisp set of lip features, we give a fuzzified set of lip parameters to the virtual lip model, so that the lip model parameters can be changed within the support of the membership function of each lip parameter, and this enables

0-7803-7293-X/01/$17.00 © 2001 IEEE — 2001 IEEE International Fuzzy Systems Conference
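The fuzzification step described above can be illustrated with a short sketch. The paper does not specify the membership function shape or the names of the extracted lip features, so this example assumes symmetric triangular membership functions and hypothetical parameters (mouth_width, mouth_height); it is an illustration of the idea, not the authors' implementation.

```python
def triangular_membership(x, center, half_support):
    """Degree of membership of x in a triangular fuzzy set centered at
    `center` with support [center - half_support, center + half_support].
    Returns 1.0 at the center, falling linearly to 0.0 at the edges."""
    distance = abs(x - center)
    if distance >= half_support:
        return 0.0
    return 1.0 - distance / half_support


def fuzzify_lip_parameters(measured, half_supports):
    """Turn crisp tracked lip measurements into fuzzy sets.

    Each crisp measurement becomes a fuzzy set described by its center,
    its support interval, and a membership function. The virtual lip
    model may then select any value inside the support, weighted by its
    membership degree, instead of being forced to reproduce a possibly
    noisy crisp value from the tracker."""
    fuzzy = {}
    for name, value in measured.items():
        hs = half_supports[name]
        fuzzy[name] = {
            "center": value,
            "support": (value - hs, value + hs),
            "membership": lambda x, c=value, h=hs: triangular_membership(x, c, h),
        }
    return fuzzy


# Hypothetical crisp measurements (in pixels) from one tracker frame,
# with per-parameter tolerances reflecting the expected tracking error.
tracked = {"mouth_width": 62.0, "mouth_height": 18.0}
tolerances = {"mouth_width": 6.0, "mouth_height": 3.0}

fuzzy_params = fuzzify_lip_parameters(tracked, tolerances)
w = fuzzy_params["mouth_width"]
print(w["support"])           # (56.0, 68.0)
print(w["membership"](62.0))  # 1.0 at the measured value
print(w["membership"](65.0))  # 0.5 halfway to the support edge
```

Because the membership function degrades gracefully away from the measured value, a small tracking error merely lowers the membership degree rather than producing an invalid lip shape, which is the robustness property the fuzzification is meant to provide.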