applied sciences
Article

Convolutional Neural Networks Using Skip Connections with Layer Groups for Super-Resolution Image Reconstruction Based on Deep Learning

Hyeongyeom Ahn and Changhoon Yim *

Intelligent Image Processing Laboratory, Konkuk University, Seoul 05029, Korea; gregandks@gmail.com
* Correspondence: cyim@konkuk.ac.kr; Tel.: +82-2-450-4016

Received: 12 February 2020; Accepted: 10 March 2020; Published: 13 March 2020

Abstract: In this paper, we propose a deep learning method with convolutional neural networks (CNNs) using skip connections with layer groups for super-resolution image reconstruction. In the proposed method, the CNN layers for residual data processing are divided into several layer groups, and skip connections with different multiplication factors are applied from the input data to these layer groups. With the proposed method, the processed data in hidden-layer units tend to be distributed over a wider range. Consequently, the feature information from the input data is transmitted to the output more robustly. Experimental results show that the proposed method yields a higher peak signal-to-noise ratio and better subjective quality than existing methods for super-resolution image reconstruction.

Keywords: convolutional neural networks; deep learning; super-resolution; image reconstruction; skip connection; layer group

1. Introduction

Single image super-resolution (SISR) is a method to reconstruct a super-resolution image from a single low-resolution image [1,2]. Reconstructing a super-resolution image is generally difficult because of various issues, such as blur and noise. Before the advent of deep learning, image processing methods such as interpolation were developed for this purpose. Today, many applications in the image processing and computer vision field are based on deep learning [14].
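The grouped skip connections described in the abstract can be sketched in a toy one-dimensional form. Everything below (the number of groups, the per-group multiplication factors, and the filter shapes) is an illustrative assumption, not the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(x, w):
    """One 'convolution' layer followed by ReLU (toy 1-D stand-in)."""
    return np.maximum(np.convolve(x, w, mode="same"), 0.0)

def layer_group(x, weights):
    """A group of consecutive conv+ReLU layers."""
    for w in weights:
        x = conv_relu(x, w)
    return x

def grouped_skip_network(x_in, groups, factors):
    """Pass data through layer groups; before each group, add the
    network input scaled by that group's multiplication factor.
    A final skip connection adds the input to the residual output."""
    x = x_in
    for g_weights, alpha in zip(groups, factors):
        x = layer_group(x + alpha * x_in, g_weights)
    return x + x_in

# Hypothetical setup: 3 groups of 2 layers each, decaying skip factors.
x_in = rng.standard_normal(16)
groups = [[rng.standard_normal(3) * 0.1 for _ in range(2)] for _ in range(3)]
factors = [1.0, 0.5, 0.25]
y = grouped_skip_network(x_in, groups, factors)
print(y.shape)  # (16,)
```

Because the residual branch only adds detail on top of the skip connections, zeroing all filter weights collapses the network to the identity mapping on the input.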
The first deep learning solution for super-resolution reconstruction from a low-resolution image using convolutional neural networks (CNNs) was the super-resolution convolutional neural network (SRCNN) method [1]. However, in SRCNN, learning was not performed well in deep layers. The very deep super-resolution (VDSR) method [2] learns more efficiently in deep layers and achieves better performance than SRCNN for super-resolution reconstruction. Although VDSR has much deeper layers than SRCNN, it is efficient because it focuses on generating only the residual (high-frequency) information by connecting the input data to the output of the last layer with a skip connection.

However, in the VDSR method, the gradient information vanishes owing to repeated rectified linear unit (ReLU) operations; it was observed that the number of hidden data units with vanishing gradients increases as training proceeds over many iterations [5]. Batch normalization [6] can be applied to mitigate gradient vanishing, but it may cause data distortion and other negative effects when reconstructing super-resolution images.

The super-resolution image reconstruction performance of VDSR [2] is significantly better than that of SRCNN because VDSR uses deep layers and a skip connection. In the existing VDSR method [2], however, the skip connection is applied only once, between the input data and the output of the last layer.

Appl. Sci. 2020, 10, 1959; doi:10.3390/app10061959 www.mdpi.com/journal/applsci
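The single skip connection in VDSR described above can be sketched as follows. The one-dimensional layers, filter sizes, and depth are assumptions for illustration, not the actual VDSR architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_branch(x, weights):
    """Deep stack of conv+ReLU layers that learns only the
    residual (high-frequency) detail (toy 1-D stand-in)."""
    h = x
    for w in weights:
        h = np.maximum(np.convolve(h, w, mode="same"), 0.0)
    return h

def vdsr_like(x_upscaled, weights):
    """VDSR-style reconstruction: a single skip connection adds the
    interpolated low-resolution input to the residual output."""
    return x_upscaled + residual_branch(x_upscaled, weights)

# Hypothetical 20-layer residual branch on an interpolated input.
x = rng.standard_normal(32)
weights = [rng.standard_normal(3) * 0.05 for _ in range(20)]
y = vdsr_like(x, weights)
print(y.shape)  # (32,)
```

With the skip connection, the network needs to learn only the residual: if the branch outputs zero everywhere, the reconstruction reduces to the interpolated input itself.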