IEEE COMMUNICATIONS LETTERS, VOL. 17, NO. 4, APRIL 2013

Optimizing Chien Search Usage in the BCH Decoder for High Error Rate Transmission

Ramy F. Taki El-Din, Rabab M. El-Hassani, and Salwa H. El-Ramly, Senior Member, IEEE

Abstract—In hybrid automatic repeat request (HARQ), Bose-Chaudhuri-Hocquenghem (BCH) codes can be used before transmission over a noisy channel. A sent message is retrieved correctly via decoding whenever it is correctable; for an uncorrectable message, the receiver requests a retransmission. In this paper, the detection time for uncorrectable words is reduced. In particular, the usage of the Chien search is optimized: it is invoked only when all roots of the error locator polynomial belong to F*_{2^m} = GF(2^m) \ {0}. Two binary primitive narrow-sense BCH codes are considered: the short BCH(63,39,9) code and the long BCH(16383,16215,25) code.

Index Terms—Chien search, error correction, error detection, hybrid automatic repeat request.

I. INTRODUCTION

For error detection, redundant bits are added before transmission. Using an error detection technique such as the cyclic redundancy check, the receiver requests retransmission of erroneous words. In poor channels, frequent retransmission is a major disadvantage.

For error correction, the message is encoded into a valid codeword before transmission. Codewords sent over a noisy channel are subject to errors. Using an error correction technique such as BCH, the received word is corrected to the most likely codeword. Error correction techniques suffer a complexity disadvantage when used on a high error rate channel: the higher the error-correcting capability t, the greater the encoder/decoder complexity [1].

Hybrid automatic repeat request (HARQ) permits retransmissions if attempts at error correction fail. A message of length k bits is encoded into a valid codeword before transmission. The BCH decoder attempts to correct all errors that may occur, but a retransmission is requested if decoding fails.
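For reference, the Chien search whose usage the letter optimizes simply tests every nonzero element of GF(2^m) as a candidate root of the error locator polynomial. The following is a minimal sketch over GF(2^4); the field size, primitive polynomial, and example coefficients are illustrative assumptions, not the paper's BCH(63,39,9) or BCH(16383,16215,25) setup.

```python
# GF(2^4) arithmetic via the primitive polynomial x^4 + x + 1 (an
# illustrative choice, not the paper's field).
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):            # build antilog/log tables: EXP[i] = alpha^i
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b10000:            # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(15, 30):        # extend so LOG sums never need a modulo
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    """Multiply two GF(2^4) elements using the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def chien_search(lam):
    """Return the exponents i for which alpha^i is a root of
    Lambda(x) = lam[0] + lam[1]*x + lam[2]*x^2 + ..."""
    roots = []
    for i in range(15):        # try every nonzero field element alpha^i
        a_i = EXP[i]
        val, power = 0, 1
        for c in lam:          # evaluate sum of c_j * (alpha^i)^j
            val ^= gf_mul(c, power)
            power = gf_mul(power, a_i)
        if val == 0:
            roots.append(i)
    return roots

# Example: Lambda(x) = (x + alpha^2)(x + alpha^5) has coefficient list
# [alpha^7, alpha^1, 1] = [11, 2, 1]; its roots are alpha^2 and alpha^5.
print(chien_search([11, 2, 1]))  # [2, 5]
```

The search cost grows with the field size, which is why the letter's point of invoking it only when all roots lie in F*_{2^m} pays off for long codes.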
Over a noisy channel, the number of retransmissions can typically be kept low by using a sufficiently strong code. However, a strong code may introduce drawbacks:

• A higher error correction capability results in greater encoder/decoder complexity and delay. In fact, the code correction capability is determined not only by the channel environment, but also by the encoder/decoder design restrictions. For reducing the decoder's hardware complexity, the reader may refer to [2] and [3].

• A higher correction capability t requires increasing the overhead, i.e., the ratio of the number of redundant bits to the number of information bits.

Preventing frequent retransmissions is the main advantage of a strong code. However, under a low-complexity requirement, a code with adequate correction capability and fairly many retransmissions is sometimes preferable; see Example 1.

Example 1: Consider transmission over an AWGN channel with bit error probability p. A binary symmetric channel (BSC) with crossover probability p is also an appropriate model. Let C_1 and C_2 be two codes with codeword block length n = 63 bits. Correction capabilities t = 5 and t = 7 require n - k = 27 and 39 redundant bits for C_1 and C_2, respectively. Let P_1 and P_2 be the probabilities of receiving an uncorrectable word using C_1 and C_2, respectively. Therefore,

P_1 = \sum_{i=6}^{63} \binom{63}{i} p^i (1-p)^{63-i} \quad\text{and}\quad P_2 = \sum_{i=8}^{63} \binom{63}{i} p^i (1-p)^{63-i}.

Manuscript received November 25, 2012. The associate editor coordinating the review of this letter and approving it for publication was M. Lentmaier. R. F. Taki El-Din and R. M. El-Hassani are with the Department of Engineering Physics and Mathematics, Ain Shams University, Cairo, Egypt (e-mail: ramyfarouk@hotmail.com, rabab_elhassani@eng.asu.edu.eg). S. El-Ramly is with the Department of Electronics & Communications Engineering, Ain Shams University, Cairo, Egypt (e-mail: salwa_elramly@eng.asu.edu.eg). Digital Object Identifier 10.1109/LCOMM.2013.022213.122651
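The tail probabilities P_1 and P_2 of Example 1 can be evaluated directly; a minimal sketch (the helper name is my own):

```python
from math import comb

def p_uncorrectable(n, t, p):
    """Probability that a BSC with crossover probability p flips more than
    t of n bits, i.e. the received word exceeds the code's correction
    capability and decoding must fail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

P1 = p_uncorrectable(63, 5, 0.057)   # code C1, t = 5
P2 = p_uncorrectable(63, 7, 0.057)   # code C2, t = 7
print(round(P1, 4), round(P2, 4))    # 0.1489 0.0263, matching Example 1
```

Summing the complementary head of the distribution, 1 - P(X <= t), gives the same result and is numerically cheaper for small t.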
For each message, the expected numbers of redundant bits E_1 and E_2 are given by

E_j = P_j^3 (63)(3) + \sum_{\mu=1}^{3} (1 - P_j) P_j^{\mu-1} (63\mu - k_j), \qquad j = 1, 2 \quad (1)

where μ is the number of transmissions until a successful reception occurs, with a maximum of three trials. If 0 < p ≤ 0.05, then 27 < E_1 ≤ 33.5 and E_2 ≃ 39.5 bits. Specifically, assuming p = 0.057, we get P_1 = 0.1489, P_2 = 0.0263, and E_1 = 37.89 < E_2 = 40.7 bits. Hence C_1, which requires retransmission 15% of the time, is preferable to C_2, which requires retransmission 3% of the time. Moreover, the encoder/decoder complexity of C_1 is lower than that of C_2.

Most communication systems are designed to tolerate the worst bit error rate by using a sufficiently strong code. However, some exceptional cases may arise:

• In wireless transmission, reception conditions may vary sharply due to mobility and interference. For instance, in Multiple Input Multiple Output systems, interference is highly dependent on the transmission parameters of interfering transmitters, which are generally unknown [4]. In [5], WiFi-based long-distance networks reported loss rates between 4% and 70% in urban areas.

• Extreme wireless network environments may suffer high loss rates, up to 50% [6]. For instance, airborne data links experience high variation in quality due to mobility, weather, and other effects that cause high-loss-rate environments.

• A fairly good code allowing more frequent retransmissions may be preferable; see Example 1.
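Equation (1) can be checked numerically against the figures quoted in Example 1; a minimal sketch (helper names are my own):

```python
from math import comb

def p_uncorrectable(n, t, p):
    # Tail of the binomial distribution: more than t bit flips out of n.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

def expected_redundant_bits(P, n, k, trials=3):
    """Eq. (1): expected redundant bits per message with at most `trials`
    transmissions. If all attempts fail, every transmitted bit counts as
    overhead (the n * trials term)."""
    e = P**trials * n * trials
    e += sum((1 - P) * P**(mu - 1) * (n * mu - k)
             for mu in range(1, trials + 1))
    return e

p = 0.057
E1 = expected_redundant_bits(p_uncorrectable(63, 5, p), 63, 36)  # C1: k1 = 36
E2 = expected_redundant_bits(p_uncorrectable(63, 7, p), 63, 24)  # C2: k2 = 24
print(round(E1, 2), round(E2, 2))  # 37.89 40.7, matching Example 1
```

Sweeping p over (0, 0.05] with this sketch also reproduces the quoted ranges 27 < E_1 ≤ 33.5 and E_2 ≃ 39.5.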