18374 IEEE SENSORS JOURNAL, VOL. 23, NO. 16, 15 AUGUST 2023
Rough Entropy-Based Fused Granular Features
in 2-D Locality Preserving Projections for
High-Dimensional Vision Sensor Data
Saibal Ghosh, Pritam Paral, Member, IEEE, Amitava Chatterjee, Senior Member, IEEE,
and Sugata Munshi, Member, IEEE
Abstract —Locality preserving projection (LPP) is a manifold learning-based nonlinear dimensionality reduction (DR)
technique that has seen successful implementations in
pattern recognition problems. If 2-D images are vectorized
into 1-D form before applying LPP, significant spatial
neighborhood information can be lost.
Two-dimensional LPP (2DLPP) can overcome this problem
but suffers when data are susceptible to noise, outliers, and
intensity variation. To address these issues, we propose a
novel rough entropy-based granular fusion (REGF) scheme
to capture the intensity variation in the data in the form
of feature information and propose hybridization of REGF
with 2DLPP for feature extraction. The fusion technique
simultaneously combines the strengths of crisp granulation (CG) and quad-tree decomposition (QTD), and accounts for the
uncertainties introduced by these homogeneous and nonhomogeneous granulation techniques in defining the indiscernible
image regions. Moreover, it operates in the RGB color space to alleviate the loss of information encountered by
conventional granulation techniques in the gray space. Extensive experimental studies in a real-world vision sensor-
based human–robot interaction (HRI) framework have been conducted to demonstrate the effectiveness of the proposed
granular computing (GrC)-based technique, especially in rugged environments.
Index Terms— Granular computing (GrC), granulated feature fusion, human–robot interaction (HRI), locality preserving
projections (LPP), rough entropy (RE), vision sensing.
I. INTRODUCTION
IN DIMENSIONALITY reduction (DR), the low-dimensional
representation of data aims to preserve
important intrinsic information of raw data, which makes
it suitable in machine learning applications such as image
analysis, data retrieval, and pattern recognition [1]. Principal
component analysis (PCA), factor analysis (FA), linear
discriminant analysis (LDA), and so on are some of the
well-established linear DR techniques. However, depending
Manuscript received 20 April 2023; revised 18 June 2023;
accepted 18 June 2023. Date of publication 26 June 2023; date of
current version 15 August 2023. This work was supported by the
All India Council for Technical Education (AICTE), Ministry of Human
Resource Development, Government of India through the “AICTE
Doctoral Fellowship” (ADF). The associate editor coordinating the review
of this article and approving it for publication was Prof. Yu-Dong Zhang.
(Corresponding author: Saibal Ghosh.)
Saibal Ghosh, Amitava Chatterjee, and Sugata Munshi are
with the Department of Electrical Engineering, Jadavpur University,
Kolkata 700032, India (e-mail: saibal436ghosh@gmail.com;
amitava.chatterjee@jadavpuruniversity.in; sugatamunshi@yahoo.com).
Pritam Paral is with the Department of Electrical Engineering, IIEST,
Shibpur, Howrah 711103, India (e-mail: callinpritam@gmail.com).
Digital Object Identifier 10.1109/JSEN.2023.3288113
on the geometric structure of the data, linear DR techniques
in some cases cannot sufficiently explore its nonlinear
structure to retrieve the intrinsic information [2]. Hence,
augmented variants of linear DR techniques, such as kernel
PCA (KPCA) [3] and kernel LDA (KLDA) [4], have
been developed for nonlinear feature learning problems.
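As a rough illustration of how such a kernel variant performs nonlinear DR, the following sketch implements kernel PCA with an RBF kernel in plain numpy. This is our own minimal example under assumed settings (the kernel choice, the `gamma` parameter, and the function name `kernel_pca` are ours), not the implementation used in the works cited.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Illustrative kernel PCA (RBF kernel); a sketch, not a reference implementation."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances and the RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in the (implicit) feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose and keep the leading components
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    alphas = v[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    # Low-dimensional embedding of the training samples
    return Kc @ alphas

X = np.random.RandomState(0).randn(50, 10)
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (50, 2)
```

The centering step is what distinguishes this from a plain eigendecomposition of the kernel matrix: it subtracts the feature-space mean, mirroring the mean subtraction of ordinary PCA.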
Manifold learning-based techniques can also capture the
low-dimensional intrinsic nonlinear structure of data.
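To make this concrete, the sketch below shows one such manifold learning method, a minimal Laplacian eigenmaps embedding, using only numpy. The neighborhood size `k`, the heat-kernel bandwidth, and the function name are our assumptions for illustration; this is not the authors' code.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=5):
    """Minimal Laplacian eigenmaps sketch: kNN graph + heat-kernel weights."""
    n = X.shape[0]
    # Pairwise squared distances (diagonal masked out)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d2, np.inf)
    # Symmetric k-nearest-neighbor graph with heat-kernel weights
    t = np.mean(d2[np.isfinite(d2)])  # assumed bandwidth: mean squared distance
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)
    # Solve the generalized problem L f = lambda D f via the normalized Laplacian
    d = W.sum(axis=1)
    Dinv2 = np.diag(1.0 / np.sqrt(d))
    Lsym = Dinv2 @ (np.diag(d) - W) @ Dinv2
    _, u = np.linalg.eigh(Lsym)
    v = Dinv2 @ u
    # Skip the trivial constant eigenvector; keep the next n_components
    return v[:, 1:n_components + 1]

X = np.random.RandomState(1).randn(40, 5)
Y = laplacian_eigenmaps(X)
print(Y.shape)  # (40, 2)
```

The embedding keeps graph neighbors close in the low-dimensional space, which is the locality notion that LPP later linearizes.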
Representative manifold learning techniques include isometric
feature mapping (ISOMAP), locally linear embedding (LLE),
Laplacian eigenmaps (LE), and local tangent space alignment
(LTSA) [2]. However, particular mapping functions cannot be
explicitly defined for these techniques, which results in an
out-of-sample problem for the data at hand [2]. To deal with this
problem, some linear approximations of nonlinear manifold
learning algorithms have been developed, e.g., neighborhood
preserving projection (NPP) [5], neighborhood preserving
embedding (NPE) [6], and locality preserving projections
(LPP) [7]. LPP addresses this problem by approximating the
linear LE and expanding the projection map in the ambient
space rather than only on the training samples [1]. In LPP,
1558-1748 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.