2017 4th International Conference on Signal Processing, Communications and Networking (ICSCN-2017), March 16-18, 2017, Chennai, INDIA
978-1-5090-4740-6/17/$31.00 ©2017 IEEE

Flower Species Recognition System using Convolution Neural Networks and Transfer Learning

I. Gogul, V. Sathiesh Kumar
Department of Electronics Engineering
Madras Institute of Technology, Anna University
Chennai-600044, India
gogulilangoswami@gmail.com

Abstract—Automatic identification and recognition of medicinal plant species in environments such as forests, mountains and dense regions is necessary to know about their existence. In recent years, plant species recognition has been carried out based on the shape, geometry and texture of various plant parts such as leaves, stems and flowers. Flower-based plant species identification systems are widely used. While modern search engines provide methods to visually search for a query image that contains a flower, they lack robustness because of the intra-class variation among the millions of flower species around the world. Hence, in this proposed research work, a deep learning approach using Convolutional Neural Networks (CNN) is used to recognize flower species with high accuracy. Images of the plant species are acquired using the built-in camera module of a mobile phone. Feature extraction from flower images is performed using a transfer learning approach (i.e., extraction of complex features from a pre-trained network). A machine learning classifier such as Logistic Regression or Random Forest is used on top of it to yield a higher accuracy rate. This approach helps in minimizing the hardware requirements needed to perform the computationally intensive task of training a CNN. It is observed that CNN combined with a transfer learning approach as feature extractor outperforms all the handcrafted feature extraction methods such as Local Binary Patterns (LBP), color channel statistics, color histograms, Haralick texture, Hu moments and Zernike moments.
CNN combined with the transfer learning approach yields impressive Rank-1 accuracies of 73.05%, 93.41% and 90.60% using the OverFeat, Inception-v3 and Xception architectures, respectively, as feature extractors on the FLOWERS102 dataset.

Keywords—Deep Learning, Artificial Intelligence, Convolutional Neural Networks, Transfer Learning, Flower Recognition

I. INTRODUCTION

Plant species recognition based on flower identification remains a challenge for the image processing and computer vision community, mainly because of the vast number of species, their complex structure and the unpredictable variety of classes in nature. Because of these natural complexities, conventional segmentation, feature extraction, or combinations of shape, texture and color features yield only moderate accuracy on benchmark datasets. Although some feature extraction techniques combining global and local feature descriptors reach state-of-the-art accuracy in classifying flowers, there is still a need for a robust and efficient system to automatically identify and recognize flower species at a larger scale in complex environments.

Saitoh and Kaneko [1] proposed a method to recognize flowers in which two images are needed, one of the flower and the other of the leaf. This method requires the user to place a black cloth behind the flower to recognize it, which is not feasible and is inconvenient for the user in a real-time scenario. Some of the modern plant recognition systems, namely Leafsnap [2], Pl@ntNet [3], ReVes [4] and CLOVER [5], are all based on leaf identification, which requires domain knowledge of the plants. A methodology that combines morphological features such as aspect ratio, eccentricity and rectangularity with a Moving Median Center (MMC) hypersphere classifier was proposed by J.-X. Du et al. [6].
A novel approach to recognize and identify plants using shape, color and texture features combined with Zernike moments and a Radial Basis Probabilistic Neural Network (RBPNN) was proposed by Kulkarni et al. [7]. A flower classification approach based on a vocabulary of texture, color and shape features was proposed by Zisserman and tested on 103 classes [8]-[9]. To accurately recognize flowers in images, Salahuddin et al. proposed a segmentation approach that uses color clustering and domain knowledge of flowers [10]. Although numerous algorithms and methodologies have been proposed and implemented to recognize flowers and plants, they still remain difficult to analyze due to their complex 3D structure and high intra-class variation.

II. GLOBAL FEATURE DESCRIPTORS

When it comes to quantifying flower images, the three most important attributes to consider are color, texture and shape.

2.1 Color

The first important feature considered to recognize flower species is "color". One of the most reliable and simple global feature descriptors is the color histogram, which computes the frequency of pixel intensities occurring in an image. This enables the descriptor to capture the distribution of each color in an image. The feature vector is obtained by concatenating the counts for each color. For example, if a histogram of 8 bins per channel is taken into consideration, the resulting feature vector will be a 8x8x8 = 512-d feature vector. In addition, simple color channel statistics such as the mean and standard deviation could also be calculated to describe the color distribution in an image.
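The two color descriptors above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names and the choice of a joint (rather than per-channel) 8x8x8 histogram are assumptions, and the normalization step is a common convention rather than something the text specifies.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint color histogram with `bins` levels per channel.

    `image` is an (H, W, 3) uint8 array. With bins=8 this yields the
    8x8x8 = 512-d feature vector described in the text.
    """
    # Quantize each channel's 0..255 range into `bins` levels (0..bins-1).
    quantized = (image.astype(np.uint32) * bins) // 256
    # Combine the three per-channel bin indices into one joint bin index.
    idx = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()  # normalize counts to a distribution

def color_stats(image):
    """Per-channel mean and standard deviation, concatenated (6-d vector)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])
```

Both descriptors are global (they ignore pixel positions), which keeps them cheap to compute but insensitive to shape; the texture and shape descriptors that follow are meant to compensate for that.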