International Journal of Electrical and Computer Engineering (IJECE)
Vol. 14, No. 5, October 2024, pp. 6016~6022
ISSN: 2088-8708, DOI: 10.11591/ijece.v14i5.pp6016-6022
Journal homepage: http://ijece.iaescore.com

Estimation of kernel density function using Kapur entropy

Leena Chawla, Vijay Kumar, Arti Saxena
Department of Applied Sciences, School of Engineering and Technology, Manav Rachna International Institute of Research and Studies, Faridabad, India

Article history:
Received Apr 13, 2024
Revised Jul 11, 2024
Accepted Jul 17, 2024

ABSTRACT
Information-theoretic measures play a vital role in training learning systems. Many researchers have proposed non-parametric entropy estimators with applications in adaptive systems. In this work, a kernel density estimator based on Kapur entropy of order α and type β is proposed and analysed with the help of theorems and properties. The results show that the proposed density estimator is consistent, minimal, and smooth for the underlying probability density function (PDF) under the given conditions. The objective of the paper is to explain the theoretical viewpoint behind the underlying concept.

Keywords: Entropy estimator; Information-theoretic measure; Kapur entropy; Kernel density estimation; Non-parametric estimator

This is an open access article under the CC BY-SA license.

Corresponding Author:
Vijay Kumar
Department of Applied Sciences, School of Engineering and Technology, Manav Rachna International Institute of Research and Studies, Faridabad, Haryana, India
Email: vijaykumar.set@mriu.edu.in

1. INTRODUCTION
Entropy estimation plays a significant role across disciplines of science and technology such as engineering [1]–[3], biology [4], and physics [5]. Mathematical generalizations of non-parametric entropy for continuous random variables have been proposed by many researchers.
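As background for the kernel-density-based entropy estimation discussed in this paper, the sketch below shows a plain Gaussian kernel density estimate and a plug-in (resubstitution) Shannon entropy estimate. This is a minimal illustration only, not the proposed Kapur-entropy estimator of order α and type β; the function names and the Silverman bandwidth rule are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_kde(samples, h):
    """Build a kernel density estimate f_hat from `samples` using a
    Gaussian kernel with bandwidth h (an illustrative choice)."""
    samples = np.asarray(samples, dtype=float)

    def f_hat(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # One Gaussian bump centred at each sample point, averaged.
        u = (x[:, None] - samples[None, :]) / h
        k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
        return k.mean(axis=1) / h

    return f_hat

def resubstitution_entropy(samples, h):
    """Plug-in (resubstitution) Shannon entropy estimate:
    the density estimate replaces the true PDF in the entropy
    expression, evaluated at the sample points themselves:
    H_hat = -(1/n) * sum_i log f_hat(x_i)."""
    f_hat = gaussian_kde(samples, h)
    return float(-np.mean(np.log(f_hat(samples))))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)           # sample from N(0, 1)
h = 1.06 * x.std() * x.size ** (-1/5)   # Silverman's rule-of-thumb bandwidth
# True Shannon entropy of N(0, 1) is 0.5 * ln(2*pi*e), approx. 1.4189.
print(resubstitution_entropy(x, h))
```

The same plug-in idea underlies the estimators surveyed below: an expectation with respect to the unknown PDF is replaced by an average over the data, with the kernel density estimate standing in for the true density.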
Univariate and multivariate probability density function (PDF)-based Shannon entropy expressions were discussed in [6], [7]. The underlying PDF of the data may not belong to a simple parametric family; therefore, it is necessary to estimate entropy non-parametrically. Such estimates are obtained by substituting a density estimate of the data into the entropy expression in place of the actual PDF. Since a PDF is required to estimate entropy, kernel density methods, a well-studied area of research, provide an effective way to evaluate it. Wegman and Davies [8] introduced a recursive density estimator to estimate time-series and spatial-data parameters. Kernel density estimators are widely used for entropy estimation [9] because they are computationally fast, easy to incorporate, and simple to understand [10].

2. PLUG-IN ESTIMATORS
Plug-in estimators are used to estimate a feature of a probability distribution. Different plug-in estimators have been used for density estimation, such as integral estimates, resubstitution estimates, splitting-data estimates, and cross-validation estimates. A brief introduction to the different types of plug-in estimators follows.
− Integral estimates: approximate the infinite integrals appearing in the entropy expression, since exact evaluation of the integral is generally intractable. Dmitriev and Tarasenko [11] proposed a