HYBRID LATTICE REDUCTION ALGORITHM AND ITS IMPLEMENTATION ON
AN SDR BASEBAND PROCESSOR FOR LTE
Ubaid Ahmad, Min Li, Sofie Pollin, Amir Amin, Liesbet Van der Perre, Rudy Lauwereins
Interuniversity Micro-Electronics Center (IMEC) vzw
Kapeldreef-75, Leuven, B-3001, Belgium
email: {ubaid, limin, pollins, aminamir, vdperre, lauwerei}@imec.be
ABSTRACT
Lattice Reduction (LR) is a promising technique to improve
the performance of linear MIMO detectors. This paper
proposes the Hybrid LR (HLR) algorithm, a scalable LR algorithm specifically designed and optimised to exploit the Instruction-Level Parallelism (ILP) and Data-Level Parallelism (DLP) features offered by parallel programmable baseband architectures. Abundant vector-parallelism in HLR is enabled by a highly regular and deterministic data-flow. Hence, HLR can be easily parallelized
and efficiently mapped on Software Defined Radio (SDR)
baseband architectures. HLR can be adapted to operate in
two different modes to achieve the best performance/cycle
trade-off, which is highly desirable for SDR baseband pro-
cessing. The proposed algorithm has been evaluated in the
context of 3GPP-LTE and implemented on ADRES which
is a Coarse Grain Reconfigurable Array (CGRA) processor.
Most of the previously reported implementations of LR algorithms target ASIC or FPGA. To the best of the authors' knowledge, however, this is the first reported LR algorithm explicitly designed and optimized to have a scalable and adaptive implementation on a CGRA processor such as ADRES. The
reported implementation of HLR can achieve gains of up to
12 dB compared to ZF for MIMO detection.
1. INTRODUCTION
The optimal solution to the Multiple Input Multiple Output (MIMO) detection problem is the
Maximum Likelihood (ML) detector. A brute-force ML detector requires an exhaustive search over all possible transmitted symbol vectors, so its complexity grows exponentially with the number of antennas and the constellation size. The challenge is to design MIMO detectors that can
achieve performance comparable to the ML detector while
having a lower complexity. Linear MIMO detectors, such as Zero Forcing (ZF) or Minimum Mean Square Error (MMSE), are attractive choices for MIMO detection due to their low computational cost. However, they cannot efficiently remove inter-stream interference and suffer from noise amplification. Lattice Reduction
(LR) has been proposed [1] to improve the performance of a
sub-optimal detector for MIMO systems. LR applies linear transformations to the MIMO channel matrix to make its columns more orthogonal. As a result, for a given MIMO detector,
the multiple received streams can be correctly detected with
a higher probability. LR-based linear ZF/MMSE detectors
have been proposed in [1] [2]. A well known technique to
compute the reduced lattice basis is the LLL algorithm [3].
The Complex LLL (CLLL) algorithm has been proposed [4] as a variant of the LLL algorithm for MIMO processing.
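To make the basis-reduction idea concrete, the sketch below implements Lagrange (Gauss) reduction, the two-dimensional special case of the LLL algorithm, on a real-valued basis. The function names, the example basis, and the orthogonality-defect measure are our own illustrative choices, not taken from the paper.

```python
from math import sqrt

def inner(u, v):
    # Standard dot product of two real vectors.
    return sum(a * b for a, b in zip(u, v))

def lagrange_reduce(b1, b2):
    """Lagrange/Gauss reduction, the 2-D special case of LLL:
    repeatedly size-reduce b2 against b1 and swap until b1 is a
    shortest vector of the basis."""
    while True:
        if inner(b1, b1) > inner(b2, b2):
            b1, b2 = b2, b1                      # keep b1 the shorter vector
        mu = round(inner(b1, b2) / inner(b1, b1))
        if mu == 0:
            return b1, b2                        # basis is reduced
        b2 = [x - mu * y for x, y in zip(b2, b1)]  # size-reduction step

def orthogonality_defect(b1, b2):
    """||b1|| * ||b2|| / |det| -- equals 1 for an orthogonal basis."""
    det = abs(b1[0] * b2[1] - b1[1] * b2[0])
    return sqrt(inner(b1, b1)) * sqrt(inner(b2, b2)) / det

# An ill-conditioned example basis (hypothetical data).
b1, b2 = [1, 1], [3, 2]
r1, r2 = lagrange_reduce(b1, b2)
```

For this example the orthogonality defect drops from roughly 5.1 to 1: the reduced basis spans the same lattice but is much closer to orthogonal, which is exactly the property an LR-aided linear detector exploits.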
In the context of implementation, the majority of the existing LR algorithms are not designed for efficient mapping
on parallel programmable architectures. Implementations of
these algorithms reported for ASIC [5] and FPGA [6] [7]
[8] are essentially sequential and non-deterministic. These
algorithms are not suited for parallelism because of irregu-
lar data-flow, non-deterministic control flow, and extensive
memory-shuffling. These drawbacks will result in very low
resource-utilization of parallel programmable architectures.
Besides this, they cannot be adapted to achieve various performance/complexity trade-offs, which is required to exploit the flexibility offered by an SDR. The majority of the existing work on lattice reduction algorithms aims either at improving performance at the cost of higher computational complexity, or at reducing complexity at the cost of performance. An adaptive LR algorithm, however, is required for SDR.
The main contribution of this paper is a Hybrid LR
(HLR) algorithm. In the proposed algorithm we introduce
enhancements to our previously reported work [9] to make
it adaptive for implementation on an SDR baseband proces-
sor so that performance/cycle trade-offs can be made in an
efficient manner. The algorithm is designed such that compu-
tationally expensive parts of LR can be executed simultane-
ously. Hence, LR can be performed on a block of sub-carriers
in parallel. This improves the processing throughput sig-
nificantly while making the algorithm suitable for a parallel
implementation. Moreover, the deterministic data-flow enables the exploitation of DLP. In addition, HLR offers both design-time and run-time scalability. At design time, HLR can be initialized for the DLP offered by a parallel programmable architecture, while run-time scalability provides a
performance/complexity trade-off [9]. The algorithm is op-
timised to have an implementation that can adapt between
two different modes to provide the best possible performance
while consuming minimum cycles. Thus, our algorithm can
operate in different scalability modes in an adaptive fash-
ion. For performance evaluation, HLR is implemented on
ADRES [10] which is a CGRA processor for LTE baseband
processing.
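HLR itself is defined in Section 2; purely as an illustration of the data-level parallelism targeted here, the hypothetical sketch below applies one LLL-style size-reduction step in lockstep across a batch of per-subcarrier 2-D bases (same operation, different data), the access pattern a vector processor can execute as a single SIMD operation. All function names and example data are ours, not the HLR algorithm.

```python
def inner(u, v):
    # Dot product of two real vectors.
    return sum(a * b for a, b in zip(u, v))

def size_reduce_step(basis):
    """One size-reduction step on a 2-D basis [b1, b2]:
    b2 <- b2 - round(<b1, b2> / <b1, b1>) * b1."""
    b1, b2 = basis
    mu = round(inner(b1, b2) / inner(b1, b1))
    return [b1, [x - mu * y for x, y in zip(b2, b1)]]

def batched_size_reduce(bases):
    """Apply the identical step to every subcarrier's basis.
    The control flow is the same for all lanes, so the whole batch
    maps to one vector operation on a DLP architecture."""
    return [size_reduce_step(b) for b in bases]

# A batch of three per-subcarrier bases (hypothetical example data).
bases = [
    [[1, 0], [4, 1]],
    [[2, 1], [5, 3]],
    [[1, 1], [3, 2]],
]
reduced = batched_size_reduce(bases)
```

The key point is that the per-subcarrier work contains no data-dependent branches, so a block of subcarriers can be processed in parallel without serializing on any single lane.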
The remainder of this paper is organized as follows: the rest of this section describes the system model and LR-
aided MIMO detection. The HLR algorithm is proposed in
Section 2, while Section 3 details the implementation of our
proposed algorithm on ADRES. Experimental results are re-
ported in Section 4. Afterwards, conclusions are drawn in
Section 5.
1.1 Lattice Reduction-aided MIMO Detection
Consider a spatially multiplexed MIMO system with M
transmit and N receive antennas denoted as M × N. The vec-
19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011
© EURASIP, 2011 - ISSN 2076-1465