IJSRD - International Journal for Scientific Research & Development | Vol. 1, Issue 9, 2013 | ISSN (online): 2321-0613

Abstract—This paper presents a comparative study of existing hand gesture recognition systems and proposes a new approach to gesture recognition that offers an easy, cheaper alternative to input devices such as the mouse, using static and dynamic hand gestures for interactive computer applications. Despite the growing attention such systems have received, certain limitations remain in the literature. Most applications impose constraints such as specific lighting conditions, use of a particular camera, requiring the user to wear a multi-coloured glove, or needing large amounts of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). The interface presented here is simple enough to run with an ordinary webcam and requires little training.

I. INTRODUCTION

Body language is an important means of communication among humans, adding emphasis to voice messages or even constituting a complete message by itself. Gesture recognition systems could therefore be used to improve human-machine interaction. Such human-machine interfaces would allow a human user to remotely control a wide variety of devices through hand postures. Different applications have been suggested, such as the contact-less control of home appliances for welfare improvement. In order to represent a serious alternative to conventional input devices like keyboards and mice, computer vision applications such as those mentioned above must work reliably under uncontrolled lighting conditions, no matter what kind of background the user stands in front of. In addition, deformable, articulated objects such as hands increase the difficulty not only of the segmentation process but also of the shape recognition stage.
Most work in this research field tries to solve the problem by using markers, marked gloves, or a simple background. Other approaches are based on complex representations of hand shapes, which makes them unsuitable for real-time implementation. A new vision-based framework is presented in this paper that allows users to interact with computers through hand postures, with the system adapting to different lighting conditions and backgrounds. Its efficiency makes it suitable for real-time applications. The present paper covers the various stages involved in hand posture recognition, from the originally captured image to its final classification. Frames from video sequences are processed and analysed in order to remove noise, find skin tones, and label every object pixel. Once the hand has been segmented, it is either identified as a certain posture or discarded if it does not belong to the visual memory. This paper proposes an improved scheme in which a background picture is captured at the beginning and then subtracted from each incoming frame to detect the region of interest, which makes gestures easier to recognize. Today the most common means of Human-Computer Interaction (HCI) are the keyboard and mouse; these conventional input devices are easy to use, easily available, and easy to learn. However, there are currently few ways for disabled people to communicate with machines, which motivated the development of a new kind of system that lets them interact with computers easily; a hand gesture recognition system would be an excellent means for disabled people to communicate with a computer.

II. GESTURE RECOGNITION

If we step away from the world of computers and consider human-human interaction for a moment, we quickly realize that we use a broad range of gestures in communication.
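The paper gives no code for the background-subtraction step it proposes (capture a background picture once, then subtract it from each frame to find the region of interest). As a minimal illustrative sketch of that idea in pure NumPy — the threshold value, array sizes, and function name are assumptions for illustration, not taken from the paper — it might look like this:

```python
import numpy as np

def segment_foreground(frame, background, diff_thresh=30):
    """Background subtraction: a pixel belongs to the region of
    interest if it differs from the stored background image by more
    than diff_thresh in at least one colour channel."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff.max(axis=-1) > diff_thresh   # boolean foreground mask
    return mask

# Tiny synthetic example: a uniform grey 4x4 background, with a
# skin-coloured 2x2 patch ("hand") entering the top-left corner.
background = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = background.copy()
frame[:2, :2] = (200, 150, 120)

mask = segment_foreground(frame, background)
print(mask.sum())  # 4 foreground pixels
```

In a real system the mask would then be cleaned of noise (e.g. with morphological filtering) and restricted to skin-toned pixels before labelling, as the pipeline described above requires.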
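The classification stage described above, in which a segmented hand is either identified as a posture from the visual memory or discarded, could be realized in many ways; the paper does not specify one. A hedged sketch of one simple possibility is a nearest-template comparison over binary masks with a rejection threshold; every name, template, and threshold below is a hypothetical illustration, not the authors' method:

```python
import numpy as np

def classify_posture(mask, memory, reject_above=0.3):
    """Compare a binary hand mask against each stored posture template
    (the 'visual memory'). Return the best-matching label, or None if
    even the best template disagrees on too large a fraction of pixels,
    i.e. the posture is discarded."""
    best_label, best_err = None, 1.0
    for label, template in memory.items():
        err = np.mean(mask != template)   # fraction of mismatching pixels
        if err < best_err:
            best_label, best_err = label, err
    return best_label if best_err <= reject_above else None

# Toy visual memory of two 3x3 postures.
open_hand = np.array([[1, 1, 1], [1, 1, 1], [0, 1, 0]], dtype=bool)
fist      = np.array([[0, 0, 0], [1, 1, 1], [1, 1, 1]], dtype=bool)
memory = {"open": open_hand, "fist": fist}

print(classify_posture(open_hand, memory))   # "open"
noise = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]], dtype=bool)
print(classify_posture(noise, memory))       # None (not in the memory)
```

The rejection threshold is what lets the system discard shapes that do not belong to the visual memory, as the text requires.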
With respect to objects, we have a broad range of gestures that are almost universal, including pointing at objects, touching or moving objects, changing object shape, activating objects such as controls, or handing objects to others. This suggests that gestures can be classified according to their functions:
1) Semiotic: gestures used to communicate meaningful information. [2][3][4]
2) Ergotic: gestures used to manipulate the physical world and create artifacts. [2][3][4]
3) Epistemic: gestures used to learn from the environment through tactile or haptic exploration. [2][3][4]
Within these categories there may be further classification. We are primarily interested in how gestures can be used to communicate with a computer, so we are mostly concerned with empty-handed semiotic gestures. These can be further categorized according to their functionality as:
1) Symbolic gestures: gestures that have a single meaning within each culture; an emblem such as the "OK" gesture is one example. [2][3][4]
2) Deictic gestures: the type of gesture most commonly seen in HCI, namely pointing or otherwise directing the listener's attention to specific events or objects in the environment. These are the gestures made when someone says "Put That There". [2][3][4]
3) Iconic gestures: as the name suggests, these convey information about the size, shape, or orientation of the object of discourse. They are the gestures made when someone says "The plane flew like this", while moving a hand through the air like the flight path of the aircraft. [2][3][4]

Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam
Narendra V. Jagtap (1), Prof. R. K. Somani (2), Prof. Pankaj Singh Parihar (3)
(1, 2, 3) Department of Computer Science & Engineering, Institute of Technology & Management, Bhilwara (RTU), Rajasthan, India
(1) narendra.vjagtap@gmail.com