Matching Mixtures of Trajectories for Human Action Recognition

Michalis Vrigkas 1, Vasileios Karavasilis 1, Christophoros Nikou 1, and Ioannis A. Kakadiaris 2

1 Department of Computer Science, University of Ioannina, Ioannina, Greece
2 Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, Texas, USA
{mvrigkas, vkaravas, cnikou}@cs.uoi.gr, ioannisk@uh.edu

Abstract

A learning-based framework for action representation and recognition is presented, relying on the description of an action by time series of optical flow motion features. In the learning step, the motion curves representing each action are clustered using Gaussian mixture modeling (GMM). In the recognition step, the optical flow curves of a probe sequence are also clustered using a GMM; each probe sequence is then projected onto the training space, and the probe curves are matched to the learned curves using a non-metric similarity function based on the longest common subsequence, which is robust to noise and provides an intuitive notion of similarity between trajectories. In addition, canonical time warping is employed to align the mean trajectories. Finally, the probe sequence is assigned to the learned action with the maximum similarity using a nearest neighbor classification scheme. We also present a variant of the method in which the lengths of the time series are reduced by dimensionality reduction in both the training and test phases, in order to smooth out the outliers that are common in this type of sequence. Experimental results on the Weizmann, KTH, UCF Sports, and UCF YouTube action databases demonstrate the effectiveness of the proposed method.

Keywords: Human action recognition; Optical flow; Motion curves; Gaussian mixture modeling (GMM); Clustering; Dimensionality reduction; Longest common subsequence.

Preprint submitted to Computer Vision and Image Understanding, January 8, 2014