Dynamic Textures Segmentation with GPU

Juan Manuel Rodríguez, Francisco Gómez Fernández, María Elena Buemi, and Julio Jacobo-Berlles

Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina

Abstract. This work addresses the problem of motion segmentation in video sequences using dynamic textures. Motion can be globally modeled as a statistical visual process known as a dynamic texture. Specifically, we use the mixture of dynamic textures model, which can simultaneously handle different visual processes. Nowadays, GPUs are becoming increasingly popular in computer vision applications because of their cost-benefit ratio. However, GPU programming is not a trivial task, and not all algorithms can be easily ported to the GPU. In this paper, we present two implementations of a known motion segmentation algorithm based on mixtures of dynamic textures: one on the CPU and the other ported to the GPU. The performance analysis shows the scenarios for which a full GPU implementation of the motion segmentation process is worthwhile.

1 Introduction

Motion and texture are key characteristics for video interpretation. The recognition of textures in motion allows video analysis in the presence of water, fire, smoke, and crowds, among others. Understanding these visual processes has been very challenging in computer vision. Some motion segmentation methods are based on optical flow [1,2]. This approach presents difficulties such as the aperture problem and noise. The classical solution is to regularize the optical flow field; however, this produces unwanted effects in the estimated motion, smoothing edges or regions where the movement is smooth (for example, vegetation in outdoor scenes). To analyze dynamical visual processes, we need a model that can describe them.
To fully understand the properties of dynamical visual processes, it is necessary to learn a model that, given our measurements (a finite sequence of images), can recover the scene that generated them. The recognition of textures in movement, based on observed video sequences sampled from stochastic processes that take into account variations in time and space, is called dynamic textures (DT) [3]. Dynamic textures have been used for the segmentation of visual processes in video. However, when multiple dynamic textures (possibly superimposed) occur in the same scene, this model is not capable of discriminating them well. To face this problem, the Mixture of Dynamic Textures (MDT) [4] model has been proposed, which handles this issue as a constituent part of the model. The MDT algorithm can classify a set of input video sequences into different categories, given the number

L. Alvarez et al. (Eds.): CIARP 2012, LNCS 7441, pp. 607–614, 2012.
© Springer-Verlag Berlin Heidelberg 2012
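As a point of reference for the generative model underlying a dynamic texture [3], the sketch below samples frames from a linear dynamical system, where a hidden state evolves as x_{t+1} = A x_t + v_t and each frame is observed as y_t = C x_t + w_t. This is a minimal illustration only; the dimensions, matrices, and noise levels are illustrative assumptions, not the values or the learning procedure used in the paper.

```python
import numpy as np

def sample_dynamic_texture(A, C, Q_chol, r_std, x0, n_frames, rng):
    """Sample frames from a linear dynamical system (dynamic texture):
        x_{t+1} = A x_t + v_t,  v_t ~ N(0, Q)   (hidden state dynamics)
        y_t     = C x_t + w_t,  w_t ~ N(0, r^2 I)  (observed frame)
    Returns an array of shape (n_frames, observation_dim).
    """
    n = A.shape[0]          # state dimension
    p = C.shape[0]          # observation (pixel) dimension
    x = x0
    frames = []
    for _ in range(n_frames):
        # Observe the current state through C, plus pixel noise.
        frames.append(C @ x + r_std * rng.standard_normal(p))
        # Advance the hidden state with process noise (Q = Q_chol @ Q_chol.T).
        x = A @ x + Q_chol @ rng.standard_normal(n)
    return np.stack(frames)

# Hypothetical toy parameters: 5-dim state, 16-pixel "frames".
rng = np.random.default_rng(0)
n, p = 5, 16
A = 0.9 * np.eye(n)                  # stable transition matrix
C = rng.standard_normal((p, n))      # observation matrix
frames = sample_dynamic_texture(A, C, 0.1 * np.eye(n), 0.05,
                                rng.standard_normal(n), 20, rng)
print(frames.shape)  # (20, 16)
```

In the MDT setting, each mixture component is one such system with its own (A, C, Q, R) parameters, and segmentation amounts to assigning observed patches to the component most likely to have generated them.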