Machine Learning Scaled Belief Propagation for Short Codes

Matthias Hummert, Dirk Wübben and Armin Dekorsy
Department of Communications Engineering, University of Bremen, 28359 Bremen, Germany
Email: {hummert, wuebben, dekorsy}@ant.uni-bremen.de

Abstract—The problem of finding good error-correcting codes for short block lengths and corresponding decoders is an open research topic. A frequently applied soft decoder is the Belief Propagation (BP) decoder; however, its performance degrades in the presence of short loops in the Tanner graph. This is especially problematic for short codes, as loops of small length are more likely to occur. In this paper, we propose Machine Learning Scaled Belief Propagation (MLS-BP) to mitigate the performance loss of BP decoding for short codes by introducing a learned scaling factor for the receive signals. The key point of this approach is that the implementation of the BP decoder itself is not changed, and the simple scaling leads to performance comparable to other proposed BP improvements.

Index Terms—Supervised Machine Learning, Belief Propagation, error-correcting codes, short block length

I. INTRODUCTION

In a time where short packet transmissions are becoming more and more important, especially for 5G, research into finding good error-correcting codes and corresponding decoders for such transmissions is of great importance. In general, the performance of channel codes improves with the block length; therefore, finding well-performing codes for short block lengths is not an easy task. Furthermore, optimum maximum likelihood decoding is too complex for most practical scenarios. The Belief Propagation (BP) decoder is a frequently applied soft decoder, in particular for Low-Density Parity-Check (LDPC) codes [1]. This decoder has been shown to perform well for long block lengths, but is in general suboptimal due to loops in the Tanner graph.
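To make the loop problem concrete, the following is a minimal sketch (not the paper's implementation) of min-sum BP decoding for the (7,4) Hamming code, whose Tanner graph contains short loops of exactly the kind that degrade BP performance for short codes:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; its Tanner graph
# contains short loops, so BP decoding is suboptimal for this code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_bp(llr_ch, H, n_iter=10):
    """Decode channel LLRs with min-sum BP; returns hard bit decisions."""
    llr_ch = np.asarray(llr_ch, dtype=float)
    m, n = H.shape
    msg_v2c = H * llr_ch                      # init variable-to-check with channel LLRs
    total = llr_ch.copy()
    for _ in range(n_iter):
        msg_c2v = np.zeros((m, n))
        for i in range(m):                    # check-to-variable update (extrinsic)
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(msg_v2c[i, others]))
                msg_c2v[i, j] = sign * np.min(np.abs(msg_v2c[i, others]))
        total = llr_ch + msg_c2v.sum(axis=0)  # posterior LLRs
        msg_v2c = H * (total - msg_c2v)       # variable-to-check (extrinsic)
    return (total < 0).astype(int)

# Noiseless all-zero codeword (BPSK 0 -> +1 gives large positive LLRs)
print(min_sum_bp(np.full(7, 4.0), H))         # prints [0 0 0 0 0 0 0]
```

The dependencies introduced by the loops cause the message reliabilities to be overestimated, which is exactly what the scaling approaches discussed next try to compensate.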
These loops harm the decoding performance, as the reliability of the messages is overestimated due to the inherent dependencies in the code. Therefore, the question arises whether the BP decoder can be modified for decoding short codes. As reported in [2], Tanner already proposed to scale the messages in each iteration to compensate for the loops in the decoder. For minimizing the BER, [3] proposed a brute-force search to find scaling factors for the check-to-variable messages of the BP decoder, and [4] exploited the consistency condition of Log-Likelihood Ratios (LLRs) to scale the messages. In order to avoid this brute-force search or complicated analytical analysis, a machine learning (ML) search has been proposed to find these scaling factors in [5].

This work was partly funded by the German ministry of education and research (BMBF) under grant 16KIS1180K (FunKI).

Recently, the idea of learning complete neural networks (NNs) for soft decoding has attracted significant attention [6]. However, this approach only works for codes of small dimension, since the procedure of learning a new soft-input decoder using neural networks is doomed by the curse of dimensionality. Several strategies have been proposed to overcome this drawback, e.g., [7]. For sequential decoding, numerous approaches exist to learn the decoding of convolutional codes and Turbo codes [8]. Another approach is to interpret the BP decoder as an NN [9], [10]. The resulting Neural Belief Propagation (N-BP) decoder will be discussed in Subsection II-D and used as the benchmark.

In this paper, we propose an alternative approach by introducing a scaling factor for the calculation of the LLRs of the receive signals. The key aspect is that the subsequent BP decoder remains unchanged, i.e., no additional scaling factors are incorporated in the passed messages; only the input of the decoder is changed.
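The difference between the two families of approaches is where the factor is applied. A toy sketch with made-up values (the factors below are illustrative, not taken from the cited works) contrasts per-iteration message scaling with the input scaling proposed here:

```python
import numpy as np

# Prior work ([2], [3]): damp the check-to-variable messages inside the
# decoder. Shown here for one extrinsic min-sum check-node update.
alpha = 0.8                               # illustrative damping factor
incoming = np.array([2.1, -0.7, 1.5])     # v2c messages at one check node (toy)

others = incoming[1:]                     # extrinsic set for the first edge
msg = np.prod(np.sign(others)) * np.min(np.abs(others))
scaled_msg = alpha * msg                  # applied in every iteration
                                          # -> decoder internals must change

# Input scaling (this paper's idea): scale only the channel LLRs once.
w = 0.9                                   # learned factor (illustrative value)
llr_ch = np.array([1.2, -0.4, 0.8])       # channel LLRs
llr_in = w * llr_ch                       # fed to an *unmodified* BP decoder
```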
This is especially interesting for hardware implementation purposes, as an existing implementation of the BP decoder can be used without any modification of the decoder itself. The proposed approach can hence easily be integrated into existing schemes. A supervised learning procedure is introduced to adapt the proposed input scaling; it is trained offline and adds no extra computations to the online processing.

The paper is structured as follows: after discussing the system model in Section II, the Machine Learning Scaled Belief Propagation (MLS-BP) and the training procedure are presented in Section III. In Section IV the performance is investigated, and conclusions are provided in Section V.

II. PRELIMINARIES

A. System model

[Figure] Fig. 1: System model of coded BPSK transmission over AWGN channel and LLR calculation prior to BP decoding (blocks ENC, BPSK, BP; signals u, c, x, y; LLRs L_ch, L(y|x), scaling w, L(x))
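The transmission chain of Fig. 1 can be simulated in a few lines. The sketch below is illustrative only: the block length and the scaling value w are placeholders (w would be the learned factor from Section III), and no particular encoder is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8            # number of coded bits (illustrative, no specific code assumed)
sigma2 = 0.5     # noise variance of the AWGN channel
w = 0.9          # input scaling factor (placeholder for the learned value)

c = rng.integers(0, 2, n)                    # coded bits c (encoder output)
x = 1.0 - 2.0 * c                            # BPSK mapping: 0 -> +1, 1 -> -1
y = x + rng.normal(0.0, np.sqrt(sigma2), n)  # receive signal after AWGN

llr_ch = 2.0 * y / sigma2                    # channel LLRs L(y|x) for BPSK over AWGN
llr_in = w * llr_ch                          # scaled decoder input L(x)
# llr_in is then passed to the unmodified BP decoder
```

Only the last line differs from a conventional receiver chain; everything downstream of it, i.e., the BP decoder, is untouched.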