Citation: Salim, F.; Saeed, F.; Basurra, S.; Qasem, S.N.; Al-Hadhrami, T. DenseNet-201 and Xception Pre-Trained Deep Learning Models for Fruit Recognition. Electronics 2023, 12, 3132. https://doi.org/10.3390/electronics12143132
Academic Editor: Chunjie Zhang
Received: 1 June 2023
Revised: 10 July 2023
Accepted: 13 July 2023
Published: 19 July 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
DenseNet-201 and Xception Pre-Trained Deep Learning Models
for Fruit Recognition
Farsana Salim 1, Faisal Saeed 1,*, Shadi Basurra 1, Sultan Noman Qasem 2 and Tawfik Al-Hadhrami 3

1 DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK; farsana.salim@bcu.ac.uk (F.S.); shadi.basurra@bcu.ac.uk (S.B.)
2 Computer Science Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; snmohammed@imamu.edu.sa
3 School of Science and Technology, Nottingham Trent University, Nottingham NG11 8NS, UK; tawfik.al-hadhrami@ntu.ac.uk
* Correspondence: faisal.saeed@bcu.ac.uk
Abstract: With the dramatic growth of the global population and rising food insecurity, fulfilling the need for foods such as vegetables and fruits has become a major concern for both individuals and governments. Moreover, the growing demand for healthy food, including fruit, has increased the need for agricultural applications that enable better methods for fruit sorting and for fruit disease prediction and classification. Automated fruit recognition is a potential solution for reducing the time and labor required to identify different fruits in settings such as retail checkouts, sorting lines in fruit processing centers, and orchards during harvest. Automating these processes reduces the need for human intervention, making them cheaper, faster, and immune to human error and bias. Past research in the field has focused mainly on the size, shape, and color features of fruits or has employed convolutional neural networks (CNNs) for their classification. This study investigates the effectiveness of pre-trained deep learning models for fruit classification using two distinct datasets: Fruits-360 and the Fruit Recognition dataset. Four pre-trained models, DenseNet-201, Xception, MobileNetV3-Small, and ResNet-50, were chosen for the experiments based on their architectures and features. The results show that all models achieved accuracy of almost 99% or higher on Fruits-360. On the Fruit Recognition dataset, DenseNet-201 and Xception achieved accuracies of around 98%. The strong results of DenseNet-201 and Xception on both datasets are notable: DenseNet-201 attained accuracies of 99.87% and 98.94%, and Xception attained 99.13% and 97.73%, on Fruits-360 and the Fruit Recognition dataset, respectively.
Keywords: DenseNet; fruit recognition; food security; MobileNetV3; pre-trained models; ResNet;
Xception
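The core idea behind using pre-trained models for fruit recognition can be sketched as follows: reuse a CNN backbone (here DenseNet-201) whose convolutional filters were learned on a large image corpus, and attach a new classifier head sized to the number of fruit classes. This is a minimal illustrative sketch, not the paper's exact pipeline; the class count of 131 is an assumption (one published version of Fruits-360 has 131 classes), and `weights=None` is used only so the sketch runs offline, where `weights="imagenet"` would load the pre-trained filters.

```python
# Minimal transfer-learning sketch with Keras (illustrative; the paper's
# actual training configuration is an assumption here).
import tensorflow as tf

NUM_CLASSES = 131  # hypothetical class count for Fruits-360

# Pre-trained backbone without its original ImageNet classifier head.
# weights="imagenet" would load pre-trained filters; None keeps the
# sketch runnable offline.
backbone = tf.keras.applications.DenseNet201(
    include_top=False,
    weights=None,
    input_shape=(224, 224, 3),
    pooling="avg",
)
backbone.trainable = False  # freeze the pre-trained feature extractor

# New classifier head sized to the fruit classes.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

In practice one would then call `model.fit` on the labeled fruit images, optionally unfreezing the backbone afterwards for fine-tuning at a lower learning rate.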
1. Introduction
It has become one of the main priorities of many governments worldwide to provide enough food, including vegetables and fruits, to all their citizens. Moreover, there is an increased need for smart solutions in agriculture to support better decisions, for instance in applications for fruit sorting and for fruit disease prediction and classification. Fruit recognition refers to the automatic identification, from images, of the exact type and variety of a fruit. This classification is a challenging problem because of the large number of varieties of fruits and vegetables. Although different fruits and vegetables show distinguishable variations in physical features such as shape, color, and texture, the differences between varieties may not be easily noticeable in images. External factors that affect the images, including lighting conditions, distance, camera angle, and background, further add to the complexity. Tang et al. [1] conducted a comprehensive