Research Article
A Novel Time-Incremental End-to-End Shared Neural
Network with Attention-Based Feature Fusion for Multiclass
Motor Imagery Recognition
Shidong Lian (1,2,3), Jialin Xu (2,3), Guokun Zuo (2,3), Xia Wei (1), and Huilin Zhou (2,3,4)
1 College of Electrical Engineering, Xinjiang University, Urumqi 830047, China
2 Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang 315201, China
3 Zhejiang Engineering Research Center for Biomedical Materials, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang 315300, China
4 University of Chinese Academy of Sciences, Beijing 100049, China
Correspondence should be addressed to Jialin Xu; xujialin@nimte.ac.cn and Xia Wei; 30462111@qq.com
Received 15 October 2020; Revised 13 January 2021; Accepted 29 January 2021; Published 18 February 2021
Academic Editor: Pietro Aricò
Copyright © 2021 Shidong Lian et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In the research of motor imagery brain-computer interface (MI-BCI), traditional electroencephalogram (EEG) signal recognition
algorithms appear to be inefficient in extracting EEG signal features and improving classification accuracy. In this paper, we
discuss a solution to this problem based on a novel step-by-step method of feature extraction and pattern classification for
multiclass MI-EEG signals. First, the training data from all subjects are merged and augmented with an autoencoder to meet the need
for large amounts of data while mitigating the adverse effects of the randomness, instability, and individual
variability of EEG data on signal recognition. Second, an end-to-end shared structure with an attention-based time-incremental shallow convolutional
neural network is proposed. Shallow convolution neural network (SCNN) and bidirectional long short-term memory (BiLSTM)
network are used to extract frequency-spatial domain features and time-series features of EEG signals, respectively. Then, the
attention model is introduced into the feature fusion layer to dynamically weight these extracted temporal-frequency-spatial
domain features, which greatly contributes to the reduction of feature redundancy and the improvement of classification accuracy.
Finally, validation tests on the BCI Competition IV 2a data set show that the classification accuracy and kappa coefficient
reach 82.7 ± 5.57% and 0.78 ± 0.074, respectively, demonstrating the method's advantages in improving classification accuracy and
reducing individual differences among subjects within the same network.
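The abstract does not specify the fusion layer's exact form, so the following is only a minimal numpy sketch of the general idea of attention-weighted feature fusion: features from two branches (e.g., frequency-spatial features from the SCNN and temporal features from the BiLSTM) are scored, softmax-normalized into attention weights, and combined as a weighted sum. All shapes, the random feature values, and the scoring vector `w` are illustrative assumptions, not the paper's actual architecture or parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features, w):
    """Attention-weighted fusion of feature branches.

    features: (n_branches, batch, d) stacked branch outputs
    w:        (d,) scoring vector (learned in practice; fixed here)
    Returns the fused features (batch, d) and the
    per-sample branch weights (batch, n_branches).
    """
    scores = np.einsum('nbd,d->bn', features, w)   # score each branch per sample
    alpha = softmax(scores, axis=1)                # attention weights sum to 1
    fused = np.einsum('bn,nbd->bd', alpha, features)  # weighted sum of branches
    return fused, alpha

rng = np.random.default_rng(0)
f_scnn = rng.normal(size=(8, 64))     # stand-in frequency-spatial features
f_bilstm = rng.normal(size=(8, 64))   # stand-in temporal features
feats = np.stack([f_scnn, f_bilstm])  # (2, 8, 64)
w = rng.normal(size=64)
fused, alpha = attention_fuse(feats, w)
```

Because the weights are computed per sample, the fusion can emphasize whichever branch is more informative for a given trial, which is the mechanism the abstract credits with reducing feature redundancy.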
1. Introduction
The brain-computer interface (BCI) is a communication
control system established between the brain and the ex-
ternal devices through the signals generated by brain ac-
tivity. By creating direct communication between the brain and
an external device, the system relies not on muscles or
peripheral nerves but on the central nervous system [1]. Motor
imagery (MI) is a psychological process in which an individual
mentally simulates body movements. During the process of
performing different MI tasks, when a certain area of the
cerebral cortex is activated, the metabolism and blood flow
of this area increase. Meanwhile, the simultaneous information
processing leads to an amplitude decrease or even
blocking of the EEG mu and beta rhythm oscillations. This
electrophysiologic concept is called event-related desynch-
ronization (ERD). In contrast, the phenomenon of a
marked amplitude increase of the mu and beta oscillations,
which appears in resting or idle states, is called event-
related synchronization (ERS) [2].
The purpose of MI-BCI is to identify imagined
movements by classifying the electroencephalogram (EEG)
characteristics of the brain, in order to control external devices,
such as robots [3, 4]. On the one hand, MI-BCI can help
Hindawi
Computational Intelligence and Neuroscience
Volume 2021, Article ID 6613105, 16 pages
https://doi.org/10.1155/2021/6613105