Lower Bounds for Divergence in Central Limit Theorem
Peter Harremoës 1
Department of Mathematics, University of Copenhagen
Abstract
A method for finding asymptotic lower bounds on information divergence is developed and used to determine the rate of convergence in the Central Limit Theorem.
Keywords: Central Limit Theorem, cumulant, Hermite polynomial, information
divergence, kurtosis, maximum entropy, rate of convergence, skewness.
1 Introduction
Recently Oliver Johnson and Andrew Barron [JB01] proved that the rate of convergence in the information theoretic Central Limit Theorem is upper bounded by $c/n$ under suitable conditions, for some constant $c$. In general, if $r_0 > 2$ is the smallest number such that the $r_0$'th moment does not vanish, then a lower bound on total variation is $c/n^{r_0/2-1}$ for some constant $c$. Using Pinsker's inequality this gives a lower bound on information divergence of order $1/n^{r_0-2}$.
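The Pinsker step can be made explicit. With $V$ denoting total variation distance normalized to lie in $[0,1]$, Pinsker's inequality reads $D \geq 2V^2$, and squaring the total variation bound gives the stated order (a routine verification, not part of the original text):

```latex
D(P_n \,\|\, Q) \;\geq\; 2\, V(P_n, Q)^2 \;\geq\; 2 \left( \frac{c}{n^{r_0/2 - 1}} \right)^{\!2} \;=\; \frac{2c^2}{n^{r_0 - 2}},
```

which is of order $1/n^{r_0-2}$, as claimed.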
In this paper more explicit lower bounds are computed. The idea is simple and
follows general ideas related to the maximum entropy principle as described
1 Supported by a post-doctoral fellowship from the Villum Kann Rasmussen Foundation and by grants from the Danish Natural Science Council and INTAS (project 00-738). This work was mainly done during a stay at ZIF, Bielefeld.
Electronic Notes in Discrete Mathematics 21 (2005) 309–313
doi:10.1016/j.endm.2005.07.076