International Journal of Computer Vision 32(2), 87–109 (1999)
© 1999 Kluwer Academic Publishers. Manufactured in The Netherlands.

Improving Depth Image Acquisition Using Polarized Light

A.M. WALLACE, B. LIANG, E. TRUCCO AND J. CLARK
Department of Computing and Electrical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland
andy@cee.hw.ac.uk
bojian@cee.hw.ac.uk
mtc@cee.hw.ac.uk
jclark@cee.hw.ac.uk

Received January 22, 1997; Revised October 11, 1998

Abstract. Control of the source and analysis of the polarization properties of the reflected light in a laser rangefinder based on triangulation offer a potential solution to the problem of distinguishing the primary laser stripe from unwanted inter-reflections caused by holes and concavities on metal surfaces. In this paper, the established polarization theory of first and subsequent inter-reflections from metallic surfaces is reviewed. This provides a point of comparison for ellipsometric measurements which verify the particular applicability of the microfacet surface model in our context. We demonstrate how a conventional laser rangefinder can be modified to discriminate between primary and secondary reflections. However, our experiments on third and subsequent reflections show that more complex models are required to provide complete resolution of the problem. Furthermore, error analysis demonstrates the requirement for very precise control of the source and receiving optoelectronics. We conclude by demonstrating the acquisition of a depth image with and without polarization optics and discuss the significance of our results for laser depth measurement.

Keywords: depth sensing, triangulation, metallic reflections, polarization, vision

1. Introduction

The most common methods of active depth sensing are triangulation and time-of-flight measurement of a laser projection (Besl, 1988; Nitzan, 1988).
In a typical triangulation system, a source laser beam is stretched by a cylindrical lens to project a stripe onto the surface of the object or objects in the scene; the imaging camera or cameras compute 3D position from the position of that stripe in the image plane, viewed from a displaced viewpoint. In principle, the image of the stripe intersects each row at most once, so that the range z can be linked directly to a single x coordinate.

Precise location of the stripe in the image is critical. The simplest approach is to scan each row of the image for peaks in intensity, locating the position of the first or largest response with subpixel accuracy (Trucco et al., 1998). However, if the scene contains objects with high specular reflectivity, there may be several peaks in the signal along a single row, caused by the primary ("true") reflection and by secondary and subsequent ("false") inter-reflections of the laser light. In general, it is not possible to distinguish the true from the false reflections on the basis of intensity alone, since the secondary and higher-order reflections may well be brighter than the primary signal.

In Fig. 1, a 3D point P is illuminated by the laser beam, but secondary reflection from P causes a brighter spot at Ps. If this reflection is detected by the right camera, the apparent depth of the surface is at Pr; similarly, the left camera may measure the depth at Pl. Thus, false detection of the secondary reflections can cause incorrect, outlying depth values in the acquired depth image. Particular problems occur when images of metallic objects with holes and concavities are acquired.
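The row-scanning step described above can be illustrated with a minimal sketch. The function below is not the authors' implementation; it assumes a single image row given as an array of intensities, detects every local maximum above a threshold, and refines each with quadratic (parabolic) interpolation, a standard subpixel technique. The point is that with inter-reflections a row can yield several candidate peaks, and the brightest is not necessarily the primary stripe:

```python
import numpy as np

def stripe_peaks(row, threshold=50.0):
    """Return subpixel x positions of all intensity peaks in one image row.

    A row may contain several peaks when inter-reflections are present;
    intensity alone does not tell the caller which one is the primary stripe.
    """
    peaks = []
    for x in range(1, len(row) - 1):
        if row[x] >= threshold and row[x] > row[x - 1] and row[x] >= row[x + 1]:
            # Fit a parabola through the three samples around the maximum;
            # its vertex gives a subpixel estimate of the peak centre.
            denom = row[x - 1] - 2.0 * row[x] + row[x + 1]
            offset = 0.0 if denom == 0 else 0.5 * (row[x - 1] - row[x + 1]) / denom
            peaks.append(x + offset)
    return peaks

# Synthetic row: a "primary" stripe at x = 10 and a brighter
# "secondary" inter-reflection at x = 21 (values are hypothetical).
row = np.zeros(32)
row[9:12] = [80.0, 120.0, 80.0]
row[20:23] = [100.0, 200.0, 100.0]
print(stripe_peaks(row))  # → [10.0, 21.0]
```

Taking the largest response here would select x = 21, the false reflection, which is exactly the failure mode the polarization analysis in this paper is designed to prevent.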