Identifying Planes in Point Clouds for Efficient Hybrid Rendering

Roland Wahl, Michael Guthe, Reinhard Klein
Universität Bonn, Computer Graphics Group
{wahl, guthe, rk}@cs.uni-bonn.de

Abstract

We present a hybrid rendering technique for feature-rich colored point clouds that achieves both high performance and high quality. Planar subsets in the point cloud are identified to drastically reduce the number of vertices, trading transformation bandwidth for the much more plentiful fill-rate. Moreover, when rendering the planes, the filtering is comparable to elaborate point-rendering methods but significantly faster, since it is supported in hardware. This way we achieve at least 5 times the performance of simple point rendering, and 40 times that of a splatting technique of comparable quality. The preprocessing is orders of magnitude faster than comparable high-quality point cloud simplification techniques.

The plane detection is based on the random sample consensus (RANSAC) approach, which easily finds multiple structures without resorting to the expensive Hough transform. Additionally, we use an octree in order to identify planar representations at different scales and accuracies for level-of-detail selection during rendering. The octree has the additional advantage of limiting the number of planar structures per cell, thereby making their detection faster and more robust. Furthermore, the spatial subdivision facilitates handling out-of-core point clouds, both in preprocessing and rendering.

Keywords: hybrid rendering, point cloud, point rendering, plane detection, level-of-detail, out-of-core

1 Introduction

In recent years, 3D scanners have become a common acquisition device. Since these scanners produce point clouds, and rendering point primitives is simple and relatively fast, rendering of such point clouds has become an important area of research.
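To make the RANSAC plane detection from the abstract concrete, the following is a minimal Python sketch that fits a single dominant plane to a point set. It is an illustration only, not the paper's implementation: the function name, iteration count, and inlier threshold are all assumptions, and the paper's method additionally restricts detection to octree cells and extracts multiple planes per scale.

```python
import numpy as np

def ransac_plane(points, iterations=200, threshold=0.01, rng=None):
    """Fit one plane n.x + d = 0 to an (N, 3) point array via RANSAC.

    Returns ((n, d), inlier_mask) for the hypothesis supported by the
    most points within `threshold` distance of the plane.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        # Hypothesis: the plane through three randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (nearly collinear) sample
            continue
        n /= norm
        d = -np.dot(n, p0)
        # Consensus: all points whose distance to the plane is small.
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

Repeating this on the points not yet assigned to a plane is the standard way such a routine finds multiple planar structures, which is what makes RANSAC attractive here compared to an exhaustive Hough transform.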
Of course, not only the geometry of an object is captured, but also color or other material properties. Furthermore, points are also a reasonable primitive for extremely detailed models.

Figure 1. Scanned Welfenschloss point cloud exhibiting high-frequency material details.

Whenever the triangles of a mesh mostly project to at most one pixel, rendering a point cloud is more efficient. On current graphics hardware, the fill-rate is 10 to 20 times higher than the vertex transformation rate. Therefore, interactive rendering algorithms try to replace pixel-sized points with primitives covering multiple fragments. These can either be polygons or more complex point primitives like splats or surfels. However, if the scanned object is textured or features high-frequency geometry, such a simplification of the model is not possible, because it would remove important information. To preserve the appearance of a model across all levels of the hierarchy, a reduction operation (i.e. merging two close-by vertices) can only be performed if either the normal and color of the first vertex comply with those of the second one, or the two vertices are closer together than the allowed approximation error. Since the first case holds only for smooth variations of normal and color on the object, the vertex distance roughly equals the approximation error for models with high-frequency details. For pixel-accurate rendering, this again leads to primitives which basically cover single pixels. Therefore, this approach,