International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 5, October 2023, pp. 5853~5864
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i5.pp5853-5864
Journal homepage: http://ijece.iaescore.com

Convolutional auto-encoded extreme learning machine for incremental learning of heterogeneous images

Sathya Madhusudanan, Suresh Jaganathan, Dattuluri Venkatavara Prasad
Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India

Article history: Received Dec 10, 2022; Revised Mar 29, 2023; Accepted Apr 7, 2023

ABSTRACT

In real-world scenarios, continually updating a system's learned knowledge becomes critical as data arrives ever faster and in vast volumes. Moreover, the learning process becomes complex when the data features vary due to the addition or deletion of classes. In such cases, the generated model should still learn effectively. Incremental learning refers to learning from data that arrives continually over time. It requires continuous model adaptation with limited memory resources and without sacrificing model accuracy. In this paper, we propose a straightforward knowledge transfer algorithm, the convolutional auto-encoded extreme learning machine (CAE-ELM), implemented through the incremental learning methodology for supervised classification using an extreme learning machine (ELM). Incremental learning is achieved by training an individual model for each set of homogeneous data and transferring knowledge among the models, without sacrificing accuracy and with minimal memory resources. In CAE-ELM, a convolutional neural network (CNN) extracts the features, a stacked autoencoder (SAE) reduces their size, and an ELM learns and classifies the images. The proposed algorithm is implemented and evaluated on several standard datasets: MNIST, ORL, JAFFE, FERET, and Caltech.
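The final stage of the pipeline described above, the ELM classifier, can be sketched in a few lines. This is a minimal illustration under stated assumptions: the CNN and SAE stages are assumed to have already produced the feature matrix X, and the hidden-layer size and tanh activation are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def elm_train(X, T, n_hidden=64, seed=0):
    """Train an ELM. X: (n_samples, n_features); T: one-hot targets (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never updated
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)               # class with the largest output score
```

Because the input weights stay random and only the output weights beta are solved in closed form, "training" reduces to a single pseudoinverse computation, which is what makes the ELM fast enough to retrain per chunk in an incremental setting.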
The results indicate the effectiveness of the proposed algorithm.

Keywords: Autoencoder; Convolutional neural networks; Extreme learning machine; Heterogeneous data; Incremental learning

This is an open access article under the CC BY-SA license.

Corresponding Author:
Suresh Jaganathan
Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering
Chennai, Tamilnadu, India
Email: sureshj@ssn.edu.in

1. INTRODUCTION

Information systems differ widely in form and application, but all convert data into meaningful information. Real-world applications generate data in huge volumes, making the process of acquiring knowledge complex. Irrespective of the type of data, which may be homogeneous (same feature set across the chunks) or heterogeneous (different feature set across the chunks) [1], the models generated from these systems must continually learn to predict or classify. Incremental learning (or constructive learning) [2], a machine learning technique introduced for continual learning, updates the existing model as data streams in. Figure 1 shows an incremental learning model, which grows the network as data belonging to new classes arrives. It applies to classification and clustering tasks, addressing the challenges of data availability and resource scarcity. An incremental learning algorithm must meet these criteria [3]: i) accommodate new classes, ii) incur minimal overhead when training on new classes, iii) not retrain previously trained data, and iv) preserve previously acquired knowledge. Challenges faced by incremental learning are: a) Concept drift: refers to the changes observed in the data distribution over time [4], which fall into two categories: i) virtual concept drift (covariate concept drift), where changes are seen in the data distribution