Evaluating the significance of cutting planes of
wood samples when training CNNs for forest
species identification

Geovanni Figueroa-Mata*, Erick Mata-Montero†, Juan Carlos Valverde-Otárola‡, Dagoberto Arias-Aguilar‡§
*School of Mathematics, †School of Computing, ‡School of Forestry Engineering,
Costa Rica Institute of Technology, Cartago, Costa Rica
Email: *gfigueroa@tec.ac.cr, †emata@tec.ac.cr, ‡jcvalverde@tec.ac.cr, §darias@tec.ac.cr
Abstract—With the goal of quantifying the importance of each
of the cutting planes of wood samples in the training process of a
convolutional neural network that identifies forest species based
on images of those cutting planes, we propose a convolutional
model that is trained from scratch with images of transverse,
radial, and tangential sections of Costa Rican forest species wood
samples. The best Top-1 accuracy achieved is 89.58% when the
network is trained with transverse sections only. Because this is
more than 20% better than the accuracy achieved when using
any of the other two sections individually, we conclude that this
is the most significant section of all three. This is consistent with
current practice of experts, who prefer this cutting plane when
conducting manual identifications based on anatomical features
of wood samples.
Index Terms—convolutional neural network, automated image-
based species identification, Costa Rican forest species, cutting
planes.
I. INTRODUCTION
The anatomical identification of wood species at the macro-
scopic level is a manual process that requires a high degree
of knowledge to observe and differentiate certain anatomical
structures present in a wood sample [1]–[3]. A sample is
obtained after cutting through the different layers of trunk
tissue until the heartwood is reached. Three cutting planes
are made on it:
• Transverse section (X): It is also called the cross section. It
is cut perpendicular to the axis of the trunk and therefore
also perpendicular to the direction of the wood fiber (see
Figure 1).
• Radial section (R): It is parallel to the axis of the trunk
and the rays. In the radial section, parallel rays are cut
longitudinally (see Figure 1).
• Tangential section (T): It is produced by cutting the trunk
parallel to its axis tangentially with respect to the annual
rings. Rays are cut at right angles (see Figure 1).
The observation of anatomical structures is typically per-
formed on each of these cutting planes with the help of a
hand lens with at least 10X magnification [5], [6]. Then, by
using an identification key, an expert determines the species
of the sample.
Fig. 1. Cutting planes. (Taken from [4]).
In recent years, the problem of anatomical identification
of wood species has been approached from a computational
viewpoint. Most researchers use machine learning techniques
with predefined (handcrafted) feature extraction, such as the
gray-level co-occurrence matrix (GLCM), local binary patterns
(LBP), and the scale-invariant feature transform (SIFT), among
others [7]–[15]. For species identification, classifiers such as
support vector machines (SVM), artificial neural networks
(ANN), and k-nearest neighbors (KNN) are frequently used as
well. Only recently have
deep learning techniques been applied successfully to the
macroscopic identification of wood species [16]–[18].
In general, studies on automated image-based identifica-
tion of wood samples use cross sections of wood to train and
validate the proposed algorithms, possibly because experts
use mainly cross sections when manually identifying wood
samples. However, to our knowledge, the individual contribu-
tion of each of these cuts to the training process of an image-
based learning algorithm has not been quantified. In this work,
we propose to measure it with a model based on convolutional
neural networks trained from scratch with images of cross,
radial, and tangential sections separately.
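The per-plane experiment can be sketched as follows; the tiny network, the 64×64 input size, and the training loop below are illustrative placeholders assuming PyTorch, not the architecture or hyperparameters used in this paper.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN trained from scratch (placeholder architecture)."""
    def __init__(self, n_species):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 spatial map with 32 channels after two pools
        self.classifier = nn.Linear(32 * 16 * 16, n_species)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_one_plane(images, labels, n_species, epochs=1):
    """Train a fresh model on images of a single cutting plane (X, R, or T)."""
    model = SmallCNN(n_species)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model
```

Calling `train_one_plane` once per cutting plane yields three independently trained models, whose validation accuracies can then be compared to quantify each plane's contribution.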
This paper is structured as follows: Section II presents the
methods and procedures used to obtain wood samples. In
addition, it describes the database and the data augmenta-
tion techniques employed to scale up the number of images
per species. Section III describes the convolutional neural
network. Section IV presents the conducted experiments and
978-1-5386-6122-2/18/$31.00 ©2018 IEEE