Augmented Compression for Server-Side Rendering

Fabian Giesen, Ruwen Schnabel and Reinhard Klein
University of Bonn, Computer Graphics Group
Email: {giesen,schnabel,rk}@cs.uni-bonn.de

Abstract

In this work we recall attention to problems that arise in a client-server setting with server-side rendering and propose a practical method for accelerated high-quality render-stream compression on the server. Server-side rendering is gaining importance for three main reasons: great differences in the quality and performance of clients' systems (ranging from PDAs to high-end workstations) complicate application development; 3D content providers refrain from transmitting costly 3D data to clients; and no adequate, widely accepted standardized 3D interchange format exists. A major challenge is the high server workload per client. To address one factor of the server load, we describe an augmented compression of server-side renderings that produces standard video streams but exploits the additional information available through image warping for motion estimation.

1 Introduction

Over the course of the last decade, 3D hardware has become cheap and commonplace; an average new PC has better graphics capabilities than dedicated workstations had 10 years ago. As a result, applications that make extensive use of 3D graphics are becoming increasingly widespread and popular. It also means that standards have risen considerably: even relatively cheap hardware is able to render scenes with millions of visible triangles and hundreds of megabytes of texture data at interactive rates. At the same time, 3D display of one sort or another has appeared even in relatively weak embedded and mobile devices; an example is car navigation systems, which by now typically show a (relatively crude) 3D rendition of the area surrounding the car.
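The warping-based motion estimation mentioned in the abstract can be illustrated with a small sketch: since the server knows each pixel's depth and the camera parameters of consecutive frames, a pixel can be reprojected into the previous frame, which directly yields a motion-vector prediction instead of a blind block search. The following function is a hypothetical illustration under a simple pinhole-camera model, not the authors' implementation; all names (`warp_motion_vector`, the matrix conventions) are assumptions.

```python
import numpy as np

def warp_motion_vector(px, py, depth, K, prev_view, cur_view):
    """Predict the motion vector of pixel (px, py) by 3D reprojection.

    K          -- 3x3 camera intrinsics (same for both frames)
    prev_view  -- 4x4 world-to-camera matrix of the previous frame
    cur_view   -- 4x4 world-to-camera matrix of the current frame
    depth      -- view-space depth of the pixel in the current frame
    """
    # Unproject the pixel from the current frame into world space.
    p_cam = np.linalg.inv(K) @ np.array([px, py, 1.0]) * depth
    p_world = np.linalg.inv(cur_view) @ np.append(p_cam, 1.0)
    # Reproject the world-space point into the previous frame.
    q_cam = prev_view @ p_world
    q = K @ q_cam[:3]
    qx, qy = q[0] / q[2], q[1] / q[2]
    # The displacement to the previous position is the predicted motion vector.
    return qx - px, qy - py
```

For a static camera the predicted motion vector is zero; a pure camera translation produces a displacement scaled by inverse depth, which matches the parallax a block-based search would otherwise have to find.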
But while 3D rendering is starting to become a commodity, there is huge variation in the available levels of performance and quality. On PCs, high-end graphics cards not only provide a far larger feature set than integrated graphics chips, they are also one to two orders of magnitude faster. For embedded and mobile devices, the differences are even bigger, ranging from graphics chips that only provide a framebuffer with no hardware-accelerated rendering at all, through hardware support for 2D vector graphics, to fully-fledged 3D chipsets roughly on par with high-end PC rendering hardware from around 2001.

This creates a big problem for application developers: the only way to achieve consistent quality and performance over a wide range of target machines is either to maintain separately tuned datasets and renderers for different configurations, which is very expensive to develop, or to aim for the lowest common denominator, which means that the added capabilities of newer hardware are not used at all.

Another problem with client-side rendering is the need to distribute the actual 3D content to clients. Especially if the acquisition or creation of content is a costly process, or the content contains vital business or technical information, owners agree only very reluctantly, if at all, to its distribution. For instance, with systems such as Google Earth, the potential user base is everyone with access to the Internet. This is a problem for providers of GIS (geographical information system) datasets: the data is quite costly to obtain, and making it available to virtually everyone free of charge is not always in their best interest.
Finally, just as the development of 3D hardware does not stand still, neither does rendering and geometry/material scanning technology, with the result that data formats go in and out of fashion every few years, always being replaced by a representation that is more suitable for the current state of the art.

VMV 2008, O. Deussen, D. Keim, D. Saupe (Editors)