Passive Single Image-based Approach for Camera Steering in Face Recognition at-a-Distance Applications

Eslam Mostafa, Moumen Elmelegy, Aly Farag
CVIP Lab, University of Louisville, Louisville, KY, USA
eslam.mostafa,moumen.elmelegy,aly.farag@louisville.edu

Abstract

This paper investigates the problem of automatically steering one or more Narrow Field of View (NFOV) cameras to a target subject using only a single image from a reference NFOV camera, without the help of a Wide Field of View (WFOV) camera. To find the approximate distance of the subject from the reference camera, our algorithm uses information from facial biometrics, specifically the inter-pupil distance (IPD) and the eye-to-lips distance (ELD). A trigonometric relationship is formulated to calculate the steering parameters of the other cameras. Moreover, a new robust facial feature detector is proposed in order to estimate the required biometrics. Several experiments are reported to evaluate the proposed system.

1. Introduction

Recently, there has been a noticeable upsurge in biometric applications due to advances in technology and higher security demands. There is particular interest in biometric systems capable of acquiring data at a distance for integrated surveillance/identity tasks, since active cooperation from the target may not be required. Key challenges for these at-a-distance systems include imperfect facial images and large coverage areas. To deal with large coverage areas, Face Recognition At-a-Distance (FRAD) systems usually employ a multi-camera setup. This also enables the 3D reconstruction of faces, which can improve face recognition results.

There has been a considerable amount of earlier effort on the management of camera networks in surveillance applications.
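The core idea stated in the abstract, inferring the subject's distance from a known facial biometric such as the IPD and then computing steering angles, can be illustrated with a minimal pinhole-camera sketch. The focal length, average IPD value, and function names below are illustrative assumptions, not values or code from the paper:

```python
import math

# Assumed population-average inter-pupil distance (mm); illustrative only.
AVG_IPD_MM = 63.0

def distance_from_ipd(ipd_pixels: float, focal_length_px: float,
                      avg_ipd_mm: float = AVG_IPD_MM) -> float:
    """Approximate subject distance (mm) under the pinhole model.

    Similar triangles give  ipd_pixels / focal_length_px = avg_ipd_mm / distance,
    so  distance = focal_length_px * avg_ipd_mm / ipd_pixels.
    """
    return focal_length_px * avg_ipd_mm / ipd_pixels

def steering_angles(x_mm: float, y_mm: float, z_mm: float) -> tuple:
    """Pan/tilt angles (degrees) to aim a camera at a 3D point expressed in
    that camera's own frame (x right, y down, z forward)."""
    pan = math.degrees(math.atan2(x_mm, z_mm))
    tilt = math.degrees(math.atan2(-y_mm, math.hypot(x_mm, z_mm)))
    return pan, tilt

# Example: a 40-pixel IPD seen through a 1000-pixel focal length
# puts the subject roughly 1.575 m from the camera.
d = distance_from_ipd(40.0, 1000.0)
```

The estimate is coarse because real IPDs vary across individuals and with head pose, which is one reason the paper also uses the eye-to-lips distance and a robust feature detector.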
We will review prior work in this field, focusing on efforts related to system configuration and on how one or more cameras are steered to a particular subject, given that the subject is captured by another camera, for the purpose of facial image capture and recognition.

Figure 1. A multi-NFOV camera surveillance system: the cameras are constantly moving to cover the whole area (1st row). A suspicious subject is detected by one camera (2nd row, left), while at that time the other camera may be imaging a completely different area (2nd row, middle). The goal is to steer this other camera to get the same target subject in its field of view (2nd row, right).

Stillman et al. [15] developed a system for person recognition consisting of two static, overlapping WFOV cameras and two NFOV cameras. The two overlapping WFOV cameras are used to determine the subject's 3D location in real-world coordinates via triangulation; the two NFOV cameras are then steered based on the calculated 3D position. Hampapur et al. [10] and Wheeler et al. [17] proposed similar systems in terms of how the subject is located in 3D; they differ in the tracking method and in how the subject is detected in the WFOV camera. Krahnstoever et al. [5] increased the number of WFOV cameras from two to four for a larger coverage area, accordingly increasing the accuracy of locating the subject in 3D world coordinates. All of these systems have one or more NFOV cameras that are steered based on the location of the subject in world coordinates to