NEAREST NEIGHBOUR CLASSIFICATION ON LASER POINT CLOUDS TO GAIN OBJECT STRUCTURES FROM BUILDINGS

B. Jutzi a, H. Gross b

a Institute of Photogrammetry and Remote Sensing, Universität Karlsruhe, Englerstr. 7, 76128 Karlsruhe, Germany, boris.jutzi@ipf.uni-karlsruhe.de
b FGAN-FOM, Research Institute for Optronics and Pattern Recognition, Gutleuthausstraße 1, 76275 Ettlingen, Germany, gross@fom.fgan.de

KEY WORDS: Laser data, point cloud, classification, nearest neighbour, covariance, eigenvalues

ABSTRACT:

The application of three-dimensional building models has become more and more important for urban planning, enhanced navigation and the visualization of touristic or historic objects. 3D models can be used to describe complex urban scenes, and their automatic generation from elevation data is a challenge for current research. Extracting planes, edges and corners of man-made objects is of particular interest. This paper deals with the automatic classification of points by utilizing the eigenvalues of the covariance matrix of each point's close neighbourhood. The method is based on the analysis of 3D point clouds derived from Laser scanner data. For each 3D point, additional structural features are calculated by considering its neighbourhood. Invariance with respect to position, scale and rotation is achieved by normalizing the features. For classification, the derived features are compared with analytically calculated as well as trained feature values for typical object structures. For the generation of a training data set, several point sets with different densities and varying noise are generated and exploited. The investigations show that, for small noise levels, the quality of the classification using the analytical eigenvalues as reference is not inferior to that obtained with the trained data set. Therefore, for all structures presented here, training data sets are not necessary; an unsupervised classification based on the analytical eigenvalues suffices.
Weighting the calculated distances in the eigenvalue space depending on the structure type improves the classification result. Based on this classification, all points that may belong to a building edge are selected. By assembling these points into lines, the 3D borders of the objects are obtained. The algorithm is tested on several urban scenes and the results are discussed.

1. INTRODUCTION

Three-dimensional building models have become important in recent years for various applications like urban planning, enhanced navigation or the visualization of touristic or historic objects. They can increase the understanding and explanation of complex scenes and support the decision process of operation planning. The benefit of utilizing LIDAR data for several applications was demonstrated, for instance, by Brenner et al. (2001). For decision support and operation planning, a model of the real urban environment should be available. In most cases the object models of interest are not obtainable, and especially in time-critical situations the 3D models must be generated as fast and accurately as possible. Different approaches to generating 3D models of urban scenes are discussed in the literature (Shan & Toth, 2008). Building models are typically acquired by (semi-)automatic processing of Laser scanner elevation data or aerial imagery (Baillard et al., 1999; Geibel & Stilla, 2000). LIDAR data can be utilized for large urban scenes (Gross & Thoennessen, 2005). The processing of raw full-waveform data to gain object structures of buildings was investigated by Jutzi et al. (2005), and the iterative processing to increase the set of 3D points of buildings by Kirchhof et al. (2008). Pollefeys (1999) uses projective geometry for 3D reconstruction from image sequences. Fraser et al. (2002) use stereo approaches for 3D building reconstruction. Vosselman et al. (2004) describe a scan line segmentation method grouping points in 3D proximity.
Airborne systems are widely used, but terrestrial Laser scanners are increasingly available as well. The latter provide a much higher geometrical resolution and accuracy (mm vs. dm) and are able to acquire fine building facade details, which are an essential requirement for realistic virtual visualization.

In Section 2 the calculation of additional point features is described. The features are normalized with respect to translation, scale and rotation. In Section 3 typical constellations of points are discussed and discriminating features are presented. Examples for the combination of eigenvalues and the structure tensor are shown, and analytical feature values are derived for typical situations. For the classification procedure, the results of the trained feature values are discussed in Section 4, and the trained values are compared with the analytical values. The generation of lines is described in Section 5: points with the same eigenvectors are assembled and approximated by lines, and the resulting 3D structures (boundaries) of objects are shown for the selected laser point cloud. In Section 6 the possibilities of using additional features are summarized, and outstanding topics and aspects of the realized method are discussed.

2. EIGENVALUE ESTIMATION TO GAIN OBJECT STRUCTURES

A laser scanning device delivers 3D point measurements in a Euclidean coordinate system. For airborne systems, the height information is mostly stored in a raster grid with a predefined resolution; image cells without a measurement are interpolated by considering their neighbourhood. An example data set gathered by an airborne Laser scanner system (TopoSys®) as 3D points is shown in Figure 1a, where the color corresponds to the height. A transformation to a raster image, selecting the highest value for each pixel and then filling missing pixels with a median operation, yields Figure 1b. Due to the filtering, the image no longer represents the original 3D information.
The horizontal positions are slightly shifted, and some height values are interpolated to fill gaps where no measured value was available.
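The rasterization just described, keeping the highest value per cell and then filling empty cells with a median operation, can be sketched as follows. The cell size, the 3x3 fill window and the function name are illustrative assumptions:

```python
import numpy as np

def points_to_height_grid(points, cell=1.0):
    """Rasterize a 3D point cloud into a height image: each cell keeps the
    highest z value falling into it; empty cells are then filled with the
    median of their valid 3x3 neighbours (a simple stand-in for the median
    filling mentioned in the text)."""
    pts = np.asarray(points, dtype=float)
    xy = np.floor((pts[:, :2] - pts[:, :2].min(0)) / cell).astype(int)
    h, w = xy[:, 1].max() + 1, xy[:, 0].max() + 1
    grid = np.full((h, w), np.nan)
    for (cx, cy), z in zip(xy, pts[:, 2]):
        if np.isnan(grid[cy, cx]) or z > grid[cy, cx]:
            grid[cy, cx] = z                   # keep the highest return per cell
    # fill gaps with the median of the valid values in the 3x3 neighbourhood
    filled = grid.copy()
    for y, x in zip(*np.where(np.isnan(grid))):
        nb = grid[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        vals = nb[~np.isnan(nb)]
        if vals.size:
            filled[y, x] = np.median(vals)
    return filled
```

As the text notes, this filtering no longer preserves the original 3D information: horizontal positions are quantized to the grid and the filled cells carry interpolated rather than measured heights.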