Pedestrian Detection: Exploring Virtual Worlds

Javier Marín
Computer Vision Center, Universitat Autònoma de Barcelona, Spain

David Gerónimo, David Vázquez, Antonio M. López
Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Spain

1 Introduction

The objective of advanced driver assistance systems (ADAS) is to improve traffic safety by assisting the driver through warnings and even by automatically taking active countermeasures. Two examples of successfully commercialised ADAS are lane departure warning and adaptive cruise control, which make use of either active (e.g., radar) or passive (e.g., cameras) sensors to keep the vehicle in its lane and to maintain a safe distance from the preceding vehicle, respectively. Among the most complex safety systems are pedestrian protection systems (PPSs) (Bishop, 2005; Gandhi & Trivedi, 2007; Enzweiler & Gavrila, 2009; Gerónimo et al., 2010), which are specialised in avoiding vehicle-to-pedestrian collisions. In fact, this kind of accident results in approximately 150,000 injuries and 7,000 pedestrian fatalities every year in the European Union alone (UN-ECE, 2007). Similar statistics apply to the United States, while the figures in developing countries increase year after year.

In the case of PPSs, the most promising approaches use images as their main source of information, as can be seen in the large number of proposals exploiting them (Gerónimo et al., 2010). Hence, the core of a PPS is a forward-facing camera that acquires images and processes them using Computer Vision techniques. In fact, the Computer Vision community has traditionally maintained a special interest in detecting humans, given the challenge the topic represents. Pedestrians are among the most complex objects to analyse: their variability in pose and clothing, distance to the camera, background, illumination, and occlusion makes their detection a difficult task.
In addition, the images are acquired from a mobile platform, so traditional human detection algorithms such as background subtraction are not applicable in a straightforward manner. Finally, the task has to be carried out in real time.

State-of-the-art detectors rely on machine learning algorithms trained with labelled samples, i.e., examples (pedestrians) and counterexamples (background). Therefore, the quality of the training data is fundamental to building robust pedestrian detectors. In recent years, various authors have publicly released pedestrian datasets (Dalal & Triggs, 2005; Dollár et al., 2009; Enzweiler & Gavrila, 2009; Wojek et al., 2009; Gerónimo et al., 2010), which have gradually become more challenging (larger numbers of samples, new scenarios, occlusions, etc.). In the last decade, the traditional research process has been to present a new dataset containing real-world images, after which researchers develop new and improved detectors (Dollár et al., 2011; Enzweiler & Gavrila, 2009; Tuzel et al., 2008; Lin & Davis, 2008). In this chapter, we explore the possibilities