OR-PCA with Dynamic Feature Selection for Robust Background Subtraction

Sajid Javed
School of Computer Science and Engineering, Kyungpook National University
80 Daehak-ro, Buk-gu, Daegu, 702-701, Republic of Korea
sajid@vr.knu.ac.kr

Andrews Sobral
Laboratoire L3I, Université de La Rochelle, 17000, France
andrews.sobral@univ-lr.fr

Thierry Bouwmans
Laboratoire MIA, Université de La Rochelle, 17000, France
thierry.bouwmans@univ-lr.fr

Soon Ki Jung*
School of Computer Science and Engineering, Kyungpook National University
80 Daehak-ro, Buk-gu, Daegu, 702-701, Republic of Korea
skjung@knu.ac.kr

ABSTRACT
Background modeling and foreground object detection is the first step in a visual surveillance system. The task becomes more difficult when the background scene contains significant variations, such as water surfaces, waving trees, and sudden illumination changes. Recently, subspace learning models such as Robust Principal Component Analysis (RPCA) have provided an effective framework for separating moving objects from stationary scenes. However, because of its batch optimization process, the high-dimensional data must be processed as a whole, so traditional RPCA-based approaches suffer from high computational complexity and large memory requirements. In contrast, Online Robust PCA (OR-PCA) can handle such high-dimensional data in a stochastic manner: it processes one frame per time instance and updates the subspace basis as each new frame arrives. However, due to the lack of features, the sparse component of OR-PCA is not always robust enough to handle the various background modeling challenges, and the resulting performance is too weak for real applications. To address these challenges, this paper presents a multi-feature based OR-PCA scheme. The multi-feature model builds a robust low-rank background model of the scene.
In addition, a feature selection process is designed to dynamically select a useful set of features frame by frame, according to the weighted sum of the total features. Experimental results on challenging datasets such as Wallflower, I2R, and BMC 2012 show that the proposed scheme outperforms state-of-the-art approaches on the background subtraction task.

* Prof. Jung is the corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SAC'15 April 13-17, 2015, Salamanca, Spain.
Copyright 2015 ACM 978-1-4503-3196-8/15/04...$15.00.
http://dx.doi.org/10.1145/2695664.2695863

Categories and Subject Descriptors
I.4.9 [Image Processing and Computer Vision]: Applications

General Terms
System, Algorithm

Keywords
Multiple features, Online Robust-PCA, Feature selection, Foreground detection, Background modeling

1. INTRODUCTION
Separating moving objects from a video sequence is the first step in many computer vision and image processing applications. This pre-processing step isolates the moving objects, called the "foreground," from the static scene, called the "background." The task becomes much harder when the scene contains sudden illumination changes or geometric changes such as waving trees, water surfaces, etc. [6].

Many algorithms have been developed to tackle the challenging problems in background subtraction (also known as foreground detection) [6], [5]. Among them, Robust Principal Component Analysis (RPCA) based approaches provide an effective framework for separating foreground objects from highly dynamic background scenes.
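To make the batch-RPCA setting concrete before discussing its limitations, the sketch below decomposes a data matrix A (in background subtraction, each column would be a vectorized frame) into a low-rank part L and a sparse part S via Principal Component Pursuit, solved with an inexact augmented Lagrangian / ADMM scheme. This is a generic illustration under standard parameter choices, not the solver used in this paper; the function name and defaults are ours.

```python
import numpy as np

def rpca_pcp(A, max_iter=200, tol=1e-7):
    """Sketch of batch RPCA: min ||L||_* + lam*||S||_1  s.t.  A = L + S.

    Solved with an inexact augmented Lagrangian iteration (illustrative
    implementation, not the authors' exact method)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    mu = 1.25 / np.linalg.norm(A, 2)      # common initial penalty
    mu_bar, rho = mu * 1e7, 1.5           # penalty growth schedule
    Y = np.zeros_like(A)                  # Lagrange multipliers
    S = np.zeros_like(A)
    L = np.zeros_like(A)
    norm_A = np.linalg.norm(A, 'fro')
    for _ in range(max_iter):
        # L-step: singular value thresholding of (A - S + Y/mu)
        U, sig, Vt = np.linalg.svd(A - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding
        T = A - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual update on the residual A - L - S
        R = A - L - S
        Y += mu * R
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(R, 'fro') <= tol * norm_A:
            break
    return L, S
```

Note that every iteration requires a full SVD of an m-by-n matrix, and the whole chunk of frames A must be held in memory at once; this is exactly the cost that motivates the online (per-frame) OR-PCA formulation discussed next.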
An excellent survey of background subtraction via RPCA can be found in [1]. Although the RPCA-based approach to background subtraction has attracted a lot of attention, it currently faces some limitations. First, the algorithm relies on batch optimization: in order to decompose an input matrix A into a low-rank matrix L and a sparse component S, a chunk of samples is required