EUROGRAPHICS 2014 / S. Lefebvre and M. Spagnuolo
STAR – State of The Art Report

State of the Art Report on Real-time Rendering with Hardware Tessellation

H. Schäfer¹ and M. Nießner² and B. Keinert¹ and M. Stamminger¹ and C. Loop³
¹University of Erlangen-Nuremberg; ²Stanford University; ³Microsoft Research

Abstract

For a long time, GPUs have primarily been optimized to render more and more triangles with increasingly flexible shading. However, scene data itself has typically been generated on the CPU and then uploaded to GPU memory. Therefore, widely used techniques that generate geometry on demand at render time for the display of smooth and displaced surfaces were not applicable to interactive applications. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit’s ability to overcome this limitation, complex geometry can now be generated on the fly within the GPU’s rendering pipeline. GPU hardware tessellation enables the generation of smooth parametric surfaces and the application of displacement mapping in real-time applications. However, many well-established approaches from offline rendering are not directly transferable, due to the limited tessellation patterns and the parallel execution model of the tessellation stage. In this state of the art report, we provide an overview of recent work and challenges in this field by summarizing, discussing, and comparing methods for the real-time rendering of smooth and highly detailed surfaces.

1. Introduction

Today’s graphics cards are massively parallel processors composed of up to several thousand cores [Nvi12a]. While GPUs offer a vast amount of raw computational power, they are mainly limited by memory bandwidth. In particular, this becomes a bottleneck for real-time rendering techniques in which highly detailed surface geometry must be updated (e.g., for animation) and rasterized in every frame.
In order to tackle this problem, hardware tessellation was introduced along with the Xbox 360 [AB06] and the DirectX 11 API [Mic09]. The key idea is to generate highly detailed geometry on-the-fly from a coarser representation. Therefore, meshes are defined as a set of patches rather than a purely triangle-based representation. At run-time, the patches are sent to the GPU’s streaming processors, where they are directly refined and rasterized without further memory I/O. Tessellation densities are specified on a per-patch basis, enabling flexible level-of-detail schemes. Further, high-frequency geometric detail can be added on-the-fly by displacing generated vertices. This enables low-cost animations, since only the input patch control points need to be updated.

Hardware tessellation has gained widespread use in computer games for the display of highly detailed, possibly animated objects. In the animation industry, where displaced subdivision surfaces are the typical modeling and rendering primitive, hardware tessellation has also been identified as a useful technique for interactive modeling and fast previews. Much of the work presented in this report has been incorporated into OpenSubdiv [Pix12], an open source initiative driven by Pixar Animation Studios, for use in games and authoring tools. In the near future, hardware tessellation will also be available on mobile devices [Nvi13, Qua13], opening the door for new applications in mobile graphics.

Although tessellation is a fundamental and well-researched problem in computer graphics, the availability of fast hardware tessellation has inspired researchers to develop and significantly advance techniques specifically crafted for it. These include higher-order surface rendering methods that focus on different patch-based representations able to be processed by the tessellator.
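To make the refinement idea concrete, the following is a minimal CPU-side sketch (not actual hull/domain shader code) of what the tessellation stage conceptually does for one patch: a bicubic Bézier patch, given by 16 control points, is evaluated on a grid whose resolution is set by a per-patch tessellation factor. Only the small control mesh needs to be stored or animated; the dense vertex grid is generated at evaluation time. The function and parameter names are illustrative, not part of any API.

```python
import numpy as np

def bernstein3(t):
    # Cubic Bernstein basis functions evaluated at parameter t.
    s = 1.0 - t
    return np.array([s**3, 3*s*s*t, 3*s*t*t, t**3])

def tessellate_patch(control_points, tess_factor):
    """Evaluate a bicubic Bezier patch on a (tess_factor+1)^2 grid.

    control_points: (4, 4, 3) array of 3D control points.
    tess_factor plays the role of the per-patch tessellation density
    that hardware tessellation lets an application choose per patch.
    """
    n = tess_factor + 1
    verts = np.empty((n, n, 3))
    for i, u in enumerate(np.linspace(0.0, 1.0, n)):
        bu = bernstein3(u)
        for j, v in enumerate(np.linspace(0.0, 1.0, n)):
            bv = bernstein3(v)
            # Tensor-product evaluation: sum over k,l of bu[k]*bv[l]*P[k,l].
            verts[i, j] = np.einsum('k,l,klm->m', bu, bv, control_points)
    return verts.reshape(-1, 3)
```

Raising `tess_factor` densifies the output mesh without touching the 16 input control points, which is exactly what makes per-patch level-of-detail schemes and low-cost animation possible.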
In particular, much effort has been devoted to both accurately and approximately rendering subdivision surfaces, which are a modeling standard in the motion picture industry. Hardware tessellation is also ideally suited for displacement mapping, where high-frequency geometric detail is efficiently encoded as image data and applied as surface offsets at run-time. Several approaches for incorporating such high-frequency details on top of smooth surfaces have been developed to date.

© The Eurographics Association 2014.
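The displacement mapping idea mentioned above can be sketched as follows (again a CPU-side illustration under assumed conventions, not actual shader code): each generated surface point is offset along its unit normal by a scalar fetched from a displacement image, here modeled as a 2D array sampled bilinearly over normalized (u, v) coordinates.

```python
import numpy as np

def sample_bilinear(height_map, u, v):
    """Bilinearly sample a 2D height map at normalized coords (u, v) in [0,1]."""
    h, w = height_map.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = height_map[y0, x0] * (1 - fx) + height_map[y0, x1] * fx
    bot = height_map[y1, x0] * (1 - fx) + height_map[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def displace(position, normal, uv, height_map, scale=1.0):
    """Offset a surface point along its unit normal by the sampled height."""
    d = sample_bilinear(height_map, uv[0], uv[1])
    return np.asarray(position) + scale * d * np.asarray(normal)
```

Because the offsets live in an image rather than in the mesh, the high-frequency detail costs no additional control points and can be applied to every vertex the tessellator generates.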