EE367, WINTER 2017

Gaze Contingent Foveated Rendering

Sanyam Mehra, Varsha Sankar
{sanyam, svarsha}@stanford.edu

Abstract—The aim of this paper is to present experimental results for gaze-contingent foveated rendering on 2D displays. We display an image on a conventional digital display and use an eye-tracking system to determine the viewer's gaze coordinates in real time. Using a stack of pre-processed images, we determine the blur profile, select the corresponding image from the stack, and simulate foveated blurring for the viewer. We present results of a user study and comment on the blurring methodologies employed. Applications of this technique lie primarily in the domain of VR displays, where only a small proportion of the rendered pixels lie in the foveal region of the viewer; foveated rendering thus promises to reduce computational requirements without compromising experience or viewer comfort.

I. INTRODUCTION

Gaze-contingent display techniques dynamically update the displayed content according to the requirements of the specific application. This paper presents one such technique that exploits the physiological behavior of the human visual system to reduce the resolution of the peripheral region while maintaining full resolution in the foveal field of view. Extensions of this technique promise computational savings for rendering on planar and VR displays, whose fields of view are expected to grow in the future.

Standard psychophysical models suggest that the minimum discernible angular size increases with eccentricity. Models like [7] predict that visual acuity falls off roughly linearly as a function of eccentricity. The falloff is attributed to the reduction in receptor density in the retina, as shown in Fig. 1, and the reduced processing power the visual cortex commits to the periphery. [6] suggests that only a small proportion of pixels lie in the primary field of view, especially for head-mounted displays (HMDs).
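The linear acuity falloff described above can be sketched in code. The following is a minimal illustration, not the model of [7] itself: the foveal resolution limit `w0` and slope `m` are hypothetical placeholder values chosen for demonstration.

```python
def min_discernible_angle(eccentricity_deg, w0=1.0 / 60.0, m=0.022):
    """Minimum discernible angular size (degrees) at a given retinal
    eccentricity (degrees), assuming a linear falloff w(e) = w0 + m*e.
    w0 and m are illustrative values, not fitted model parameters."""
    return w0 + m * eccentricity_deg

def allowable_blur_factor(eccentricity_deg, w0=1.0 / 60.0, m=0.022):
    """Ratio w(e)/w0: how much coarser the periphery may be rendered
    relative to the fovea before the difference becomes discernible."""
    return min_discernible_angle(eccentricity_deg, w0, m) / w0
```

Under such a model, the blur factor grows linearly with eccentricity, which is what motivates rendering (or pre-blurring) the periphery at progressively lower resolution.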
The growing trend towards rendering on devices like HMDs, portable gaming consoles, smartphones, and tablets motivates the goal of minimizing computation while maintaining perceptual quality.

Fig. 1. Receptor density of the retina vs. eccentricity. Adapted from Patney et al. 2016.

Given the acuity-vs.-eccentricity model predictions, the slope characterizing the falloff guides the design of the blurring methods, i.e., the angular span and the magnitude of the blur. Section IV presents an analysis of the performance of gaze-based foveated rendering. The resulting image is expected to appear similar to a full-resolution image while reducing the number of pixels that must be rendered at full resolution. Section V-E shares the results of a user study conducted to evaluate the effectiveness of the system under the varying parameters mentioned above.

Fig. 2 illustrates the practical test setup, wherein the gaze location on the screen determines the regions that fall into focus, which in turn dictates the foveated blur. Ideally, the demonstration would require an eye-tracking-enabled HMD. Due to the lack of readily available hardware, however, the experimental setup comprises a 2D monitor integrated with the EyeTribe eye tracker and a rendering pipeline that renders pre-processed images.
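The pipeline sketched above (gaze coordinates in, pre-blurred image out) can be illustrated as follows. This is a hedged sketch of the stack-selection step only, not the paper's actual implementation: the pixels-per-degree value, band width `band_deg`, and stack depth `n_levels` are hypothetical parameters, and the small-angle approximation converts on-screen distance to eccentricity.

```python
import math

def eccentricity_deg(px, py, gx, gy, ppd):
    """Approximate angular eccentricity (degrees) of pixel (px, py)
    from the gaze point (gx, gy), given a display resolution of
    ppd pixels per degree (small-angle approximation)."""
    return math.hypot(px - gx, py - gy) / ppd

def select_stack_level(px, py, gx, gy, ppd, band_deg=5.0, n_levels=4):
    """Map eccentricity to an index into the pre-processed image
    stack: level 0 is the sharpest (foveal) image, higher levels are
    progressively more blurred. Parameters are illustrative."""
    e = eccentricity_deg(px, py, gx, gy, ppd)
    return min(int(e // band_deg), n_levels - 1)
```

For example, at 30 pixels per degree, a pixel 600 px from the gaze point sits at 20 degrees of eccentricity and clamps to the blurriest level, while the pixel under the gaze point always selects level 0.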