Research Article
Deep Transfer Learning Based Multiway Feature Pyramid
Network for Object Detection in Images
Parvinder Kaur,1 Baljit Singh Khehra,2 and Amar Partap Singh Pharwaha3
1Research Scholar, IKG PTU, Jalandhar, Punjab, India
2Department of CSE, BBSBEC Fatehgarh Sahib, Fatehgarh Sahib, India
3Department of ECE, SLIET Longowal, Longowal, India
Correspondence should be addressed to Baljit Singh Khehra; baljit.singh@bbsbec.ac.in
Received 20 January 2021; Revised 23 March 2021; Accepted 3 April 2021; Published 19 April 2021
Academic Editor: Vijay Kumar
Copyright © 2021 Parvinder Kaur et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Object detection is widely used in many fields, and the demand for more accurate and faster detection methods is therefore increasing. In this paper, we propose a method for object detection in digital images that is both more accurate and faster. The proposed model is based on the Single Shot MultiBox Detector (SSD) architecture. This method creates many anchor boxes of various aspect ratios over the feature maps of the backbone network and the multiscale feature network and predicts the classes and offsets of the anchor boxes to detect objects at various scales. Instead of the VGG16-based deep transfer learning model used in SSD, we use a more efficient base network, i.e., EfficientNet. Detecting objects of different sizes is still a challenging task; we use a Multiway Feature Pyramid Network (MFPN) to address this problem. The output of the base network is given to the MFPN, and the fused features are then passed to the bounding box prediction and class prediction networks. Softer-NMS is applied instead of the NMS used in SSD to prune the bounding boxes more gently. The proposed method is validated on the MS COCO 2017, PASCAL VOC 2007, and PASCAL VOC 2012 datasets and compared with existing state-of-the-art techniques. Our method shows better detection quality in terms of mean Average Precision (mAP).
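To make the anchoring step concrete, the following minimal Python sketch (not code from this paper) generates SSD-style center-form anchor boxes of several aspect ratios for each cell of each feature-map level. The feature-map sizes and scales in `levels` are illustrative assumptions loosely modeled on SSD300's defaults, not values reported in this article.

```python
import itertools
import numpy as np

def make_anchors(fmap_size, scale, ratios=(1.0, 2.0, 0.5)):
    """Center-form anchors (cx, cy, w, h), normalized to [0, 1]."""
    anchors = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
        for r in ratios:  # aspect ratio r = w / h at a fixed area scale**2
            anchors.append([cx, cy, scale * np.sqrt(r), scale / np.sqrt(r)])
    return np.array(anchors)

# Illustrative pyramid: smaller feature maps get larger anchor scales,
# so each level specializes in a different object size.
levels = [(38, 0.1), (19, 0.2), (10, 0.38), (5, 0.56), (3, 0.74), (1, 0.92)]
all_anchors = np.concatenate([make_anchors(s, sc) for s, sc in levels])
```

The class and offset heads then emit one prediction per anchor, and detections at all scales are gathered before suppression.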
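The Softer-NMS step can be sketched in the same spirit. In Softer-NMS (He et al.), the detection head additionally predicts a localization variance for each box coordinate, and each box kept by suppression has its coordinates refined by a variance-weighted vote over its overlapping neighbours. The function below is a minimal NumPy rendering of that idea, assuming such per-coordinate variances are available; softer_nms, sigma_t, and variances are illustrative names, not identifiers from this paper.

```python
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def softer_nms(boxes, scores, variances, iou_thresh=0.5, sigma_t=0.02):
    """Greedy NMS whose surviving boxes are refined by variance voting."""
    boxes = boxes.copy()
    order = scores.argsort()[::-1]  # process boxes by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        ious = iou_one_to_many(boxes[i], boxes[order])
        # Variance voting: neighbours that overlap more, and whose predicted
        # localization variance is lower, get a larger say in the coordinates.
        mask = ious > 0
        w = np.exp(-(1 - ious[mask]) ** 2 / sigma_t)[:, None] / variances[order[mask]]
        boxes[i] = (w * boxes[order[mask]]).sum(axis=0) / w.sum(axis=0)
        keep.append(i)
        order = order[1:][ious[1:] <= iou_thresh]  # suppress heavy overlaps
    return keep, boxes
```

In the original formulation this variance voting is typically combined with a soft decay of scores rather than the hard suppression used above, which is kept only to keep the sketch short.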
1. Introduction
Object detection is employed across a wide range of industries, with uses ranging from security to efficiency in the workplace. One very simple application
can be locating the lost keys in a messy room. Other
applications are surveillance, unmanned vehicles,
counting the number of people in a scene, filtering salacious images on the Internet, detecting abnormalities in
scenes such as bombs, real-time vehicle detection in
metro cities, machine inspection, image retrieval, face
detection, pedestrian detection, activity recognition,
human-computer interaction, service robots, and many
more [1]. The beginning of the last decade was very fortunate for deep learning, owing to the increased computational speed of GPUs and the availability of extremely large datasets containing millions of labeled samples. These two factors proved to be boons to deep learning and object detection, and a series of object detection and localization methods followed [2]. Overfeat [3] was proposed by Sermanet et al.
in 2014. It used a single convolutional neural network to perform classification, detection, and localization of objects in images. It also emphasized that avoiding training on the background allows the network to focus solely on the positive classes. However, this method did not backpropagate through the whole network. R-CNN (Regions with CNN features) [4] was proposed by Girshick et al. in 2014. It was an excellent
achievement in the field of object detection. It combined
the concept of region proposal with CNN. Selective
search was used to extract 2000 regions from the image,
and these regions were called region proposals. A Support Vector Machine (SVM) was used to classify the proposed regions. R-CNN performed about 30 percent better than the existing methods. However, this algorithm still takes a large amount of time to train the network. Erhan et al. in 2014