Optimal constraint vectors for set-membership affine projection algorithms

Wallace A. Martins a,*, Markus V. S. Lima a, Paulo S. R. Diniz a, Tadeu N. Ferreira b

a DEL/Poli & PEE/COPPE, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
b Fluminense Federal University, Rio de Janeiro, Brazil

Keywords: Adaptive signal processing; Set-membership filtering; Affine projection; Convex optimization

Abstract: There is a growing interest in adaptive filtering solutions whose learning processes are data selective, bringing about computational reduction and energy savings while improving estimation accuracy. The set-membership affine projection algorithms are a representative family of algorithms including data-selection mechanisms. The update process of these algorithms depends on the choice of a constraint vector (CV) which, up to now, has been based on heuristics. In this paper we propose an optimal CV and discuss some of its inherent properties. The resulting problem falls into a convex optimization framework, allowing some unexpected features to surface; for instance, the widely used simple-choice CV is asymptotically optimal for statistically white stationary inputs. Simulations indicate that the optimal CV outperforms the simple-choice CV regarding update rates and steady-state mean squared errors for statistically colored inputs.

1. Introduction

The set-membership (SM) algorithms [1-23] rely on the concept of set-membership filtering (SMF), which reduces the computational burden by updating the filter coefficients only when the error is greater than a prescribed threshold, i.e., the innovation in the observed data is checked before the data are used in the learning process. This SMF property, known as data selection, is also responsible for the robustness of the SM algorithms against noise.
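The data-selection mechanism described above can be made concrete with the SM-NLMS special case: the filter checks the a priori error against the threshold and updates only when the data carry enough innovation. The sketch below is illustrative, not the paper's code; the function name `sm_nlms`, the regularization constant `delta`, and the system-identification setup are assumptions of this example.

```python
import numpy as np

def sm_nlms(x, d, order, gamma_bar, delta=1e-8):
    """Set-membership NLMS: coefficients are updated only when the
    a priori error magnitude exceeds the prescribed threshold gamma_bar."""
    w = np.zeros(order)
    updates = 0
    for k in range(order - 1, len(x)):
        xk = x[k - order + 1:k + 1][::-1]   # regressor, most recent sample first
        e = d[k] - w @ xk                   # a priori error (innovation check)
        if abs(e) > gamma_bar:              # data-selective update condition
            mu = 1.0 - gamma_bar / abs(e)   # step placing |a posteriori error| near gamma_bar
            w = w + mu * e * xk / (xk @ xk + delta)
            updates += 1
    return w, updates
```

With this step size the a posteriori error is pulled approximately to the threshold, so once the filter is close to the solution most incoming samples fail the innovation check and no update is performed, which is the source of the computational savings.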
The set-membership affine projection (SM-AP) algorithm is an interesting alternative to the AP algorithm [24], for it combines data reuse [24-30] with data selection, thus yielding a computationally efficient algorithm with low steady-state mean squared error (MSE) and high convergence speed. The SM-AP generalizes many algorithms, including the set-membership normalized least mean square (SM-NLMS) [5], the set-membership binormalized LMS (SM-BNLMS) [8], and their non-SM counterparts with unit convergence factor [28].

The only difference between the cost functions defining the update rules of the SM-AP and AP algorithms is that the SM-AP does not impose a null a posteriori error constraint, whereas the AP does. In fact, the SM-AP constrains the a posteriori error to be equal to the so-called constraint vector (CV), which guarantees that the a posteriori error stays within the aforementioned prescribed threshold. This fact, along with the input-innovation check, is the key aspect fostering the success of the SM-AP algorithm across different applications.

Although CVs play a central role in SM-AP algorithms, all previous solutions employ heuristically defined CVs, such as the so-called simple-choice CV (SC-CV) [7,8]. Besides the good performance induced by those CVs, the heuristics employed to define them find reasonable geometrical justification (see, for instance, [31]), which somehow explains why using them has been deemed satisfactory instead of pursuing optimal CV solutions. Nonetheless, the question remains open: are there optimal CVs, and how do they relate to the most commonly used heuristic solution, the SC-CV? This work answers these questions by recasting the problem in a convex optimization framework.

1.1. Main contributions

It is shown how to solve the original SM-AP optimization problem. Indeed, all previous works yield suboptimal solutions in general setups; the key advantage of those suboptimal solutions is their computational simplicity regarding the update process.
It is shown, through geometric and formal algebraic arguments, when the widely used SC-CV coincides with the optimal CV. A key result here is that the SC-CV is asymptotically optimal for statistically white inputs.

http://dx.doi.org/10.1016/j.sigpro.2016.11.025
Received 5 August 2016; Received in revised form 26 October 2016; Accepted 29 November 2016; Available online 2 December 2016.
This work was supported by Faperj (E-26/201.390/2014, E-26/202.890/2015, and E-26/010.001573/2016), Capes (23038.009440/2012-42), and CNPq (304112/2013-5, 453868/2014-2, and 311153/2015-1), Brazilian Research Councils.
* Corresponding author. E-mail addresses: wallace.martins@smt.ufrj.br (W.A. Martins), markus.lima@ieee.org (M.V.S. Lima), diniz@smt.ufrj.br (P.S.R. Diniz), tadeu_ferreira@id.uff.br (T.N. Ferreira).
Signal Processing 134 (2017) 285-294
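To illustrate the SM-AP recursion with the heuristic simple-choice CV discussed above: in one standard formulation, the newest CV entry is clipped to the threshold (with the sign of the current error) while the remaining entries keep the errors on the reused data, so only the newest error drives the update. The sketch below follows that formulation under assumed names (`sm_ap_sc`, `delta`) and an assumed system-identification test setup; it is not the paper's implementation of the optimal CV.

```python
import numpy as np

def sm_ap_sc(x, d, order, L, gamma_bar, delta=1e-8):
    """SM-AP with the simple-choice constraint vector (SC-CV):
    the newest CV entry is clipped to +/- gamma_bar and the remaining
    entries keep the current errors on the L - 1 reused data pairs."""
    w = np.zeros(order)
    for k in range(order + L - 2, len(x)):
        # regressor matrix: the L most recent input vectors as columns
        X = np.column_stack(
            [x[k - j - order + 1:k - j + 1][::-1] for j in range(L)])
        dk = np.array([d[k - j] for j in range(L)])
        e = dk - X.T @ w                      # error vector over the reused data
        if abs(e[0]) > gamma_bar:             # innovation check on the newest error
            g = e.copy()                      # SC-CV: keep the old errors ...
            g[0] = gamma_bar * np.sign(e[0])  # ... and clip the newest one
            w = w + X @ np.linalg.solve(X.T @ X + delta * np.eye(L), e - g)
    return w
```

With the SC-CV, `e - g` has a single nonzero entry, which is what makes this heuristic computationally cheap; the optimal CV proposed in the paper instead selects all entries of `g` by solving a convex program.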