International Journal of Signal Processing, Image Processing and Pattern Recognition
Vol.8, No.9 (2015), pp.219-228
http://dx.doi.org/10.14257/ijsip.2015.8.9.23
ISSN: 2005-4254 IJSIP
Copyright ⓒ 2015 SERSC

Face and Gesture Based Human Computer Interaction

Yo-Jen Tu 1, Chung-Chieh Kao 1, Huei-Yung Lin 1 and Chin-Chen Chang 2

1 Department of Electrical Engineering and Advanced Institute of Manufacturing with High-tech Innovations, National Chung Cheng University, Chiayi 621, Taiwan
2 Department of Computer Science and Information Engineering, National United University, Miaoli 360, Taiwan
E-mail: ccchang@nuu.edu.tw (corresponding author)

Abstract

In this paper, we present a face and gesture based human computer interaction (HCI) system that combines head pose and hand gestures to control devices. The system identifies the positions of the eyes and mouth, and uses the face center to estimate the pose of the head. We also introduce a technique for automatic segmentation of the gesture area and orientation normalization of the hand gesture: the user does not need to keep gestures in an upright position, because the system segments and normalizes the gestures automatically. Experimental results show that the proposed approach achieves a gesture recognition rate of 93.6%. In addition, the user can control multiple devices, including robots, simultaneously through a wireless network.

Keywords: Human computer interaction, Skin color, Face detection, Gesture recognition

1. Introduction

Human computer interaction (HCI) [1, 2, 11, 14, 18] relies on multiple modalities such as speech, faces, and gestures. Faces and gestures are among the main nonverbal communication mechanisms between humans and computers, so real-time processing of faces and gestures is important for HCI. Moreover, in recent years the field of computer vision has progressed rapidly, and efforts have been made to apply research results to real-world scenarios.
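The abstract describes estimating head pose from the positions of the eyes and mouth relative to the face center. The paper's exact method is not given at this point, but the basic geometry can be sketched as follows; this is a minimal illustrative sketch, and the function name, the roll/yaw proxies, and the 2D pixel-coordinate conventions are our assumptions, not the authors' implementation:

```python
import math

def estimate_head_pose(left_eye, right_eye, mouth, face_center):
    """Rough 2D head-pose cues from detected facial feature positions.

    Roll: angle of the inter-eye line relative to horizontal.
    Yaw proxy: horizontal offset of the eye midpoint from the face
    center, normalized by the inter-eye distance (0 for a frontal face).
    All inputs are (x, y) pixel coordinates.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll_deg = math.degrees(math.atan2(dy, dx))

    eye_mid = ((left_eye[0] + right_eye[0]) / 2.0,
               (left_eye[1] + right_eye[1]) / 2.0)
    eye_dist = math.hypot(dx, dy)
    yaw_proxy = (eye_mid[0] - face_center[0]) / eye_dist if eye_dist else 0.0
    # A pitch proxy could be derived analogously from the vertical
    # distance between the eye midpoint and the mouth.
    return roll_deg, yaw_proxy

# Upright frontal face: eyes level, eye midpoint over the face center.
roll, yaw = estimate_head_pose((40, 50), (80, 50), (60, 90), (60, 60))
```

A tilted head shows up directly in the roll angle: with the right eye 40 pixels lower than the left at the same horizontal spacing, the inter-eye line makes a 45-degree angle with the horizontal.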
When applying research findings, hardware cost becomes an important issue. An HCI system can be used in robot tour guidance, recreational, home, and health-care applications. In museums, the traditional keyboard-and-mouse setup can be replaced with a robot tour-guide system: the robot can detect which exhibits the visitors are interested in and introduce them directly. This not only makes exhibitions more engaging, but also reduces the cost of training tour-guide personnel for the museums. For recreational use, users can replace wired controllers with hand gestures and enjoy hands-free control of electronic devices. In household use, head movements can be combined with simple hand gestures to control air conditioners, lighting, and other home appliances. The system may also aid patients in situations where their body mobility is limited.

In this paper, we use a video camera and a PC to develop a face and gesture based HCI system. The proposed system can not only detect facial features when the head is tilted, but also recognize hand gestures correctly anywhere in the image. It is robust to cluttered backgrounds and varied clothing, extracts hand regions reliably, and recognizes hand gestures efficiently using a trained neural network. In applications, we apply the proposed HCI system to a real-life scenario. We give