An Innovative Approach of Textile Fabrics Identification from Mobile Images using Computer Vision based on Deep Transfer Learning

Antonio Carlos da Silva Barros‡§, Elene Firmeza Ohata*§, Suane Pires P. da Silva*§, Jefferson Silva Almeida§†, and Pedro Pedrosa Rebouças Filho*§

*Programa de Pós-Graduação em Engenharia de Teleinformática (PPGETI), Universidade Federal do Ceará, Fortaleza, Ceará, Brazil
†Programa de Pós-Graduação em Engenharia Elétrica (PPGEE), Universidade Federal do Ceará, Fortaleza, Ceará, Brazil
‡Universidade da Integração Internacional da Lusofonia Afro-Brasileira (Unilab), Fortaleza, Ceará, Brazil
§Laboratório de Processamento de Imagens, Sinais e Computação Aplicada (LAPISCO), Instituto Federal do Ceará, Fortaleza, Ceará, Brazil
Email: {carlosbarros, elene.ohata, suanepires, jeffersonsilva}@lapisco.ifce.edu.br, pedrosarf@ifce.edu.br
Abstract—The identification of different textile fabrics is a task commonly learned through practice; it is therefore a strenuous and costly form of learning, and a source of annoyance to the individual who performs it. In this context, this paper proposes a new method for classifying textile fabrics, based on the development of a computer vision system using a Convolutional Neural Network (CNN). The CNN works as a feature extractor by incorporating the concept of Transfer Learning, which allows a pre-trained CNN model to be reused for a new problem. To highlight the high performance of the CNN, a comparative analysis is performed against feature extractors established in the literature. Metrics such as Accuracy, F1-Score, and processing time are considered to evaluate the efficiency of the proposed approach. For classification, the Bayesian Classifier, Multi-layer Perceptron (MLP), k-Nearest Neighbors (kNN), Random Forest (RF), and Support Vector Machine (SVM) were used. The results show that the best combination is the DenseNet201 CNN architecture with SVM (RBF), obtaining an accuracy of 94% and an F1-Score of 94.2%.
Index Terms—Textile Fabrics, Convolutional Neural Network,
Transfer Learning, Computer Vision
I. INTRODUCTION
The need for humans to cover the body with fabric goes back to the earliest times, when protection from cold or heat was required. The use and manufacture of fabrics became a basic need and, later, an item of luxury and social status. Clothing evolved alongside humanity and became a reflection of the social, political, religious, and moral aspects of every stage experienced by human beings [1], [2].
Nowadays, after the Industrial Revolution and subsequent scientific progress, few people need to know how to spin or weave, but they do need to know how to judge the quality and durability of machine-made cloth. The study of textiles therefore becomes essential for all consumers, and, along with the manufacture of the cloth itself, the identification of the fiber assumes greater significance [1], [2]. Some people learn to judge the quality of fabrics over time, through practice and experience, but this method of "learning by making mistakes" is costly and full of hassles for a human being [1], [2].
Computer vision has solved several problems; it is a research field that has helped humankind customize and automate different tasks, and it can be used, for instance, to aid tasks in the textile industry. In the literature, some works have applied this field of knowledge to the classification of textile fibers, the classification of flat fabrics, and the detection or inspection of defects. In [3], an automatic system for the identification of fabric structure was developed, employing principal component analysis (PCA) and fuzzy clustering. In [4], the authors used Local Binary Patterns and the Gray-Level Co-occurrence Matrix, along with artificial neural networks, to detect fabric defects. In [5], the authors proposed an algorithm for detecting defects in fabrics based on biological vision modeling. In [6], a method based on lattice segmentation and lattice templates was developed to automatically identify defects in fabric images, while in [7] the authors presented a method for detecting fabric defects based on an autoencoder. In [8], the authors used a CNN to identify fabrics, but they relied on a dataset with 19,894 images.
This article proposes an innovative approach to classifying textile fabrics. The approach consists of using a CNN for feature extraction, based on the concept of Transfer Learning. We evaluated the deep extractors with five classifiers. Since vision-based classification is a complex task, it relies on accurate predictions and fast processing times. Thus, to verify the performance of each classifier, two evaluation metrics were used: Accuracy (Acc) and F1-Score (F1S). Criteria such as extraction time and classification time were also measured.
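To make the evaluation protocol concrete, the sketch below shows how Accuracy and a macro-averaged F1-Score can be computed from a classifier's predictions. This is an illustrative implementation in plain Python, not the authors' code; the fabric class names are hypothetical placeholders, and the paper does not specify which F1 averaging scheme was used.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_macro(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Hypothetical labels for a toy three-class fabric problem.
y_true = ["cotton", "wool", "silk", "cotton", "wool"]
y_pred = ["cotton", "wool", "cotton", "cotton", "silk"]
print(accuracy(y_true, y_pred))  # → 0.6
print(f1_macro(y_true, y_pred))
```

In the paper's pipeline, `y_pred` would come from one of the five classifiers applied to CNN-extracted features, and the same two metrics would be reported for each extractor/classifier combination.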
The results show that DenseNet201, DenseNet169, and
DenseNet121 combined with SVM reached 94.35%, 93.34%,
and 93.52%, respectively, in Accuracy, demonstrating to be
978-1-7281-6926-2/20/$31.00 ©2020 IEEE