International Journal on Recent and Innovation Trends in Computing and Communication ISSN: 2321-8169

A Novel Digital Signature based on Error Correcting Codes

Younes Bayane, Fatima Amounas and Lahcen El Bermi
Computer Sciences Department, Moulay Ismaïl University, Faculty of Sciences and Technics, Errachidia, Morocco
Bayane.younes@gmail.com, f_amounas@yahoo.fr, elbermi.lahcen@gmail.com

Abstract— A digital signature is a cryptographic primitive for ensuring the authenticity of digital documents. A valid digital signature allows checking that a message was created by a known sender (authentication), that the sender cannot deny having sent it (non-repudiation), and that the message was not altered in transit (integrity). The idea of constructing practical signatures based on error correcting codes was introduced by Courtois et al. in [1]. The main goal is to build a digital signature whose security rests on the syndrome decoding problem. In this paper, a new construction of digital signature is considered, which is an extension of the error correcting code construction. The proposed method consists of reordering the message bits to obtain a decodable word, and then applying an efficient decoding algorithm to obtain the signature.

Keywords— Cryptography, Error correcting code, McEliece Cryptosystem, Niederreiter Cryptosystem, syndrome decoding problem, Digital signature, CFS signature.

I. INTRODUCTION

Today's asymmetric cryptographic algorithms are almost all based on the hardness of the integer factorization problem or the discrete logarithm problem. Up to now, no efficient algorithms are known for solving these problems on classical computers. However, this picture changes drastically once quantum computers are considered.
In 1994, Shor proposed in [2] a quantum algorithm that solves both the integer factorization problem and the discrete logarithm problem in polynomial time on a quantum computer. In order to find alternatives to the threatened schemes, post-quantum cryptography emerged and has received increased attention in recent years, especially after 2016, when NIST began its standardization effort. Nowadays, several categories of problems are studied for post-quantum cryptography. One approach that is considered well understood is cryptography based on error correcting codes. Since the design of the first code-based cryptosystem in 1978 by McEliece and of its dual variant in 1986 by Niederreiter, many cryptographic primitives have been built on the same ideas, especially after both cryptosystems were shown to offer a high level of security in [3]. Thus, in 1990, Xinmei Wang proposed the first signature scheme based on error correcting codes in [4]. The signature is generated in the same way as the plaintext is encrypted in the Rao-Nam scheme [5]. Unfortunately, it was proved insecure shortly afterwards in [6]. Several signature constructions based on the Goppa Code Distinguishing problem were subsequently designed; they are outlined below.

II. BACKGROUND

In 2001, Courtois, Finiasz and Sendrier published the first practical digital signature based on error correcting code theory. They adopted the idea of the Niederreiter cryptosystem for this purpose. This means that, given a linear code with an efficient decoding algorithm and an r × n parity check matrix H, there is a way to find, for any binary vector s of length r (called a syndrome), a word x of length n of smallest Hamming weight such that Hx^T = s^T. To sign a message, one first applies a hash function h to produce a binary string of length r.
The decoding algorithm of the code with parity check matrix H is then applied to obtain a word x of smallest Hamming weight such that Hx^T = h(m)^T. The signature of the message m is the word x. According to the authors, the signature can be produced using Goppa codes of high rate. They also proved the security of their scheme under two problems assumed to be hard, namely: the syndrome decoding problem and the indistinguishability of a binary Goppa code from a random code. In 2009, it was realized that the original parameters could be broken by an attack due to Daniel Bleichenbacher that was never published. Subsequently, a variant called Parallel-CFS was presented in [7], which avoids Bleichenbacher's attack. The new scheme has the same advantages as the original CFS, but it suffers from two drawbacks, namely: (i) it has no consistent security proof from the point of view of distinguishability, taking into consideration the distinguisher for high-rate Goppa codes presented in [8], and (ii) it needs large keys to reach good security parameters at a reasonable signing cost. Other schemes were also proposed. Kabatianskii, Krouk and Smeets presented in [9] and [10] the KKS signature scheme based on arbitrary linear error-correcting codes. They actually proposed three versions which share the same principle, namely that the signature is a codeword of a linear code, but which use different linear codes. There have also been attempts to change the original strategy by using other code families: Low Density Generator Matrix (LDGM) codes were adopted in [11], Low Rank Parity Check (LRPC) codes were used in [12], convolutional codes in [13], and more recently quasi-cyclic codes in [14]. Due to the attack described in [15] on the McEliece cryptosystem based on convolutional codes, there are some doubts about the soundness of the scheme described in [13], but up to now there is no definitive proof.
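The hash-then-decode principle behind CFS can be illustrated with a minimal sketch. This is not the paper's scheme: it substitutes a tiny [7,4] Hamming code for the high-rate Goppa codes of real CFS. Hamming codes are perfect, so every 3-bit syndrome decodes to a word of weight at most 1, whereas real CFS must rehash the message with a counter until a decodable syndrome appears. All function names here are illustrative assumptions, not taken from the original.

```python
import hashlib

# Parity check matrix H of the [7,4] Hamming code: column j (1-based) is the
# binary representation of j, so a syndrome directly names the flipped bit.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def hash_to_syndrome(message: bytes) -> list:
    """Hash the message down to an r = 3 bit syndrome (toy stand-in for h)."""
    b = hashlib.sha256(message).digest()[0]
    return [(b >> 2) & 1, (b >> 1) & 1, b & 1]

def syndrome_of(x: list) -> list:
    """Compute H x^T over GF(2)."""
    return [sum(H[i][j] * x[j] for j in range(7)) % 2 for i in range(3)]

def sign(message: bytes) -> list:
    """Signature = the minimum-weight word decoding the hashed syndrome."""
    s = hash_to_syndrome(message)
    pos = s[0] * 4 + s[1] * 2 + s[2]   # syndrome read as an integer = error position
    x = [0] * 7
    if pos > 0:
        x[pos - 1] = 1                 # single-bit error vector, weight <= 1
    return x

def verify(message: bytes, x: list) -> bool:
    """Accept iff x is low weight and H x^T matches the message's syndrome."""
    return sum(x) <= 1 and syndrome_of(x) == hash_to_syndrome(message)
```

In the real scheme, the signer needs the secret decoder of the Goppa code to invert the syndrome, while the verifier only needs the public matrix H; here the Hamming decoder is trivially public, which is exactly why this sketch carries no security.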
IJRITCC, Volume 7, Issue 3, pp. 25-28, March 2019. Available @ http://www.ijritcc.org