The Appropriateness of k-Sparse Autoencoders in Sparse Coding

Pushkar Bhatkoti, School of Computing and Mathematics, Charles Sturt University, Australia, p@bhatkoti.com
Manoranjan Paul, School of Computing and Mathematics, Charles Sturt University, Australia, mpaul@csu.edu.au

ABSTRACT - Representations can be learned in different ways. When the learning encourages sparsity, performance on categorization tasks improves. Sparse methods include learning algorithms related to sparse coding, as well as neural networks trained with sparsity penalties. The k-sparse autoencoder (KSA) is a linear model, and its appropriateness for sparse coding forms the foundation of this paper. Most importantly, the model encodes quickly and is easy to train. Given these advantages, it is well suited to large-scale problems. We used the openly available Modified National Institute of Standards and Technology (MNIST) database and the NYU Object Recognition Benchmark (NORB) dataset in supervised and unsupervised learning tasks to validate the hypothesis. The results show that traditional sparse coding algorithms cannot handle large-scale problems as effectively as the k-sparse autoencoder model.

Keywords: k-sparse autoencoder (KSA), sparsity, algorithms, sparse coding

(1) INTRODUCTION

If the learning of representations happens in a way that encourages sparsity, enhanced performance is attained on categorization tasks [68]. The methods combine sampling phases, activation functions, and various penalties. The learning algorithms (LAs) for sparse representations may be approaches related to sparse coding, as explained by Olshausen and Field [54]. In other cases, the algorithms are neural networks trained with sparsity penalties, as demonstrated by Nair and Hinton [73].

The methods consist of two phases. The first phase comprises LAs that generate a structured dictionary D, which represents the data sparsely:

x ≈ Dz, with z sparse.     (1)

The second phase comprises encoding algorithms (EAs). Based on D, the EAs define mappings from given input vectors x to the corresponding feature vectors z.

The effectiveness of sparsity by itself can be assessed using the KSA. As a model, the KSA is linear, and the only hidden activities retained are the k highest. KSAs are easy to train and fast to encode, so they are suitable for large-scale problems. Such problems are not easily resolved using traditional algorithms for sparse coding.

Traditional encoders map x to a hidden representation z by applying the function

z = f(Px + b),

where {P, b} parameterizes the encoder. The KSA is essentially a linear encoder with weights and activation functions [68]; an illustrative sketch of its encoding rule is given at the end of this section. To validate the appropriateness of the KSA, we trained a Deep Neural Network (DNN) with a KSA classifier to obtain optimal results on the MNIST and NORB datasets.

1.1 Statement of Problem

The dictionary learning and sparse coding steps are computationally costly, and this cost is a practical challenge during coding. A wide-ranging search of the literature does not reveal structured studies that examine how KSAs perform in discriminative, unsupervised, deep, and shallow learning tasks. There is a need to determine the performance of KSAs as methods for sparse encoding that attain precise sparsity within hidden representations.
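To make the encoding rule above concrete, the following NumPy sketch applies z = f(Px + b) with a linear f and then retains only the k-highest activities, as the KSA prescribes. The function name k_sparse_encode, the dimensions, and the random P and b are illustrative assumptions, not the authors' implementation; in practice P and b are learned during training.

    import numpy as np

    def k_sparse_encode(x, P, b, k):
        """Linear encoding z = Px + b, keeping only the k highest activities."""
        z = P @ x + b                  # pre-activation of the hidden layer
        keep = np.argsort(z)[-k:]      # indices of the k largest activities
        z_sparse = np.zeros_like(z)
        z_sparse[keep] = z[keep]       # all other hidden units are zeroed
        return z_sparse

    # Toy usage with random data (illustrative only; P and b would
    # normally be learned by training the autoencoder).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)        # input vector
    P = rng.standard_normal((256, 64)) # encoder weights (hidden x input)
    b = np.zeros(256)                  # encoder bias
    z = k_sparse_encode(x, P, b, k=25)
    print(np.count_nonzero(z))         # -> 25

Because the selection step is the only nonlinearity, such an encoder isolates the effect of sparsity itself, which is what makes the KSA a useful test bed for the questions studied here.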
1.2 Purpose of the study

The study has three aims: examine the performance of the KSA model in shallow, unsupervised, and deep learning; examine the performance of the model in achieving clear-cut sparsity within the hidden representation; and demonstrate how the KSA model can be applied and learned in sparse coding.

1.3 Research question

At what level does the KSA model achieve accurate sparsity within hidden representations?

(2) LITERATURE REVIEW

2.1 How does the model perform in shallow, deep, and unsupervised learning tasks?

2.1.1 Performance of the model in unsupervised learning