Rolling Shutter Image Compensation

Steven P. Nicklin, Robin D. Fisher, and Richard H. Middleton

School of Electrical Engineering and Computer Science, The University of Newcastle, Callaghan 2308, Australia

Abstract. This paper describes corrections to image distortion found on the Sony AIBO ERS-7 robots. When obtaining an image, the camera captures each pixel in series; in effect, it has a 'rolling shutter'. This results in a delay between the capture of the first and last pixels. Combined with movement of the camera, this produces a distorted image. The sensor values from the robot, coupled with knowledge of the camera's timing, are used to calculate the effect of the robot's movement on the image. This information can then be used to remove much of the distortion. The correction improves the effectiveness of shape recognition and bearing-to-object accuracy.

1 Introduction

Rolling shutters are commonly found in low-cost, low-power CMOS cameras, which are widely used in non-stationary and robotic applications. Cameras with rolling shutters do not expose the entire image at a single instant, as a global shutter does. Instead, a rolling shutter exposes pixels at different times and merges them into a single image. This causes problems when the scene changes in less time than it takes to expose the entire image, leaving some pixels with newer information than others. The combination of new and old information distorts the image whenever an object moves relative to the camera. These distortions are evident on the CMOS cameras of the Sony AIBO ERS-7 robots used in the RoboCup Four-Legged League, but they will arise whenever similar camera technology is used in a non-stationary camera.
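The skew introduced by a rolling shutter can be illustrated with a small sketch. It assumes a fixed per-row readout delay and an apparent horizontal scene motion; the constants below are illustrative and are not the ERS-7's actual timings.

```python
# Illustrative sketch (not the paper's implementation): how a rolling
# shutter skews a vertical edge when the camera pans horizontally.
# Row r is read out r * ROW_TIME seconds after row 0, so later rows
# see the scene shifted by PAN_RATE_PX * r * ROW_TIME pixels.

ROWS = 160            # image height in pixels (assumed)
ROW_TIME = 0.0002     # readout delay per row, in seconds (assumed)
PAN_RATE_PX = 500.0   # apparent horizontal motion, pixels/second (assumed)

def apparent_column(true_col: float, row: int) -> float:
    """Column at which a point on image row `row` is actually recorded."""
    delay = row * ROW_TIME                  # this row is exposed later
    return true_col + PAN_RATE_PX * delay  # scene has shifted meanwhile

# A vertical edge at column 100 is recorded as a slanted line:
top = apparent_column(100.0, 0)            # no delay on the first row
bottom = apparent_column(100.0, ROWS - 1)  # maximal delay on the last row

print(f"edge at top row: {top:.1f}, at bottom row: {bottom:.1f}")
print(f"total skew: {bottom - top:.1f} pixels")
```

With these assumed numbers the edge at the bottom row lands 15.9 pixels to the side of the edge at the top row, which is the shear a circle-fitting or edge-detection routine would have to tolerate without compensation.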
A major contributor to this distortion is the desire to constantly move the camera to gather as much information about the surrounding environment as possible. The distortions alter both the co-ordinates and the apparent shape of the objects the robot has seen. Since the distortion stretches or compresses the image of an object, the co-ordinates derived from that image are also altered. The changed co-ordinates, in particular the bearing to an object, can cause problems when attempting to determine the object's location, while the distorted shape causes problems when an object is identified or measured by its shape. An example is circle fitting on a ball image in the Four-Legged League: if the ball is no longer circular in shape, circle fitting loses some of its effectiveness.

G. Lakemeyer et al. (Eds.): RoboCup 2006, LNAI 4434, pp. 402–409, 2007.
© Springer-Verlag Berlin Heidelberg 2007
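The correction idea described above can be sketched as follows: given the camera's pan rate from the robot's joint sensors and the per-row readout delay, each pixel is shifted back by the motion that occurred between the capture of its row and the capture of a reference row. The function name and constants here are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the compensation: undo the horizontal shift each row
# accumulated relative to a reference row, using the sensed pan rate.
# Constants are assumed, not measured ERS-7 values.

ROW_TIME = 0.0002    # per-row readout delay, in seconds (assumed)
PX_PER_RAD = 200.0   # horizontal pixels per radian of camera pan (assumed)

def corrected_column(col: float, row: int, pan_rate_rad: float,
                     ref_row: int = 0) -> float:
    """Undo the horizontal shift accumulated since `ref_row` was captured."""
    delay = (row - ref_row) * ROW_TIME            # time since reference row
    shift_px = pan_rate_rad * PX_PER_RAD * delay  # motion during that delay
    return col - shift_px                         # move the pixel back

# Usage: a pixel seen at column 120 on row 100 while panning at 2 rad/s
# is mapped back toward where it would have been at the reference row.
print(corrected_column(120.0, 100, 2.0))
```

Correcting the co-ordinates rather than resampling the whole image keeps the cost low, since only the pixels belonging to detected objects need to be remapped before bearings are computed or shapes are fitted.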