HIGHER ORDER CUMULANTS OF RANDOM VECTORS, DIFFERENTIAL OPERATORS, AND APPLICATIONS TO STATISTICAL INFERENCE AND TIME SERIES

S. RAO JAMMALAMADAKA, T. SUBBA RAO, AND GYÖRGY TERDIK

Abstract. This paper provides a unified and comprehensive approach that is useful in deriving expressions for higher order cumulants of random vectors. The use of this methodology is then illustrated in three diverse and novel contexts, namely: (i) in obtaining a lower bound (the Bhattacharya bound) for the variance-covariance matrix of a vector of unbiased estimators when the density depends on several parameters; (ii) in studying the asymptotic theory of multivariate statistics when the population is not necessarily Gaussian; and finally (iii) in the study of multivariate nonlinear time series models and in obtaining higher order cumulant spectra. The approach depends on expanding the characteristic functions and cumulant generating functions in terms of Kronecker products of differential operators. Using tensor calculus, McCullagh and Speed have obtained similar results for cumulants of random vectors, but our objective here is to derive such expressions using only elementary calculus of several variables and also to highlight some important applications in statistics.

1. Introduction and Review

It is well known that cumulants of order greater than two are zero for Gaussian random variables. In view of this, higher order cumulants are often used in testing for Gaussianity and multivariate Gaussianity, as well as in proving classical limit theorems. They are also used in the asymptotic theory of statistics, for example in Edgeworth series expansions.

Consider a scalar random variable $X$ and assume that all its moments $\mu_j = E(X^j)$, $j = 1, 2, \ldots$, exist. Let the characteristic function of $X$ be denoted by $\varphi_X(\lambda)$; it then has the series expansion

(1.1) \qquad $\varphi_X(\lambda) = E(e^{i\lambda X}) = 1 + \sum_{j=1}^{\infty} \mu_j \frac{(i\lambda)^j}{j!}, \qquad \lambda \in \mathbb{R}.$

From (1.1), we observe that $(-i)^j \, d^j \varphi_X(\lambda)/d\lambda^j \big|_{\lambda=0} = \mu_j$.
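This moment identity is easy to verify symbolically for a concrete distribution. The following sketch (our illustration, not part of the paper; it assumes SymPy is available) uses the standard facts that $X \sim \mathrm{Exp}(1)$ has characteristic function $\varphi_X(\lambda) = 1/(1 - i\lambda)$ and moments $\mu_j = j!$:

```python
# Sketch (not from the paper): check that (-i)^j d^j phi/dlambda^j at
# lambda = 0 equals mu_j, for X ~ Exp(1), where phi(lambda) = 1/(1 - i*lambda)
# and mu_j = E(X^j) = j! are standard facts.
import sympy as sp

lam = sp.symbols('lambda')
phi = 1 / (1 - sp.I * lam)  # characteristic function of Exp(1)

for j in range(1, 5):
    moment = sp.simplify((-sp.I)**j * sp.diff(phi, lam, j).subs(lam, 0))
    assert moment == sp.factorial(j)  # mu_j = j! : 1, 2, 6, 24
    print(f"mu_{j} = {moment}")
```

The same differentiation pattern, applied to $\psi_X$ instead of $\varphi_X$, produces cumulants rather than moments.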
In other words, the $j$-th derivative of the Taylor series expansion of $\varphi_X(\lambda)$, evaluated at $\lambda = 0$, gives the $j$-th moment. The "cumulant generating function" $\psi_X(\lambda)$ is defined (see, e.g., Leonov and Shiryaev (1959)) as

(1.2) \qquad $\psi_X(\lambda) = \ln \varphi_X(\lambda) = \sum_{j=1}^{\infty} \kappa_j \frac{(i\lambda)^j}{j!},$

where $\kappa_j$ is called the $j$-th cumulant of the random variable $X$. As before, we see that $\kappa_j = (-i)^j \, d^j \psi_X(\lambda)/d\lambda^j \big|_{\lambda=0}$. Comparing (1.1) and (1.2), one can write the cumulants in terms of the moments and vice versa; for example, $\kappa_1 = \mu_1$, $\kappa_2 = \mu_2 - \mu_1^2$, etc. Now suppose the random variable $X$ is normal with mean $\mu$ and variance $\sigma^2$; then we know that $\varphi_X(\lambda) = \exp(i\lambda\mu - \lambda^2\sigma^2/2)$, which implies $\kappa_j = 0$ for all $j \geq 3$. We now consider

1991 Mathematics Subject Classification. Primary 62E17, 62E20; Secondary 62H10, 60E05.
Key words and phrases. Cumulants for multivariate variables, cumulants for likelihood functions, Bhattacharya lower bound, Taylor series expansion, multivariate time series.
The research of Gy. Terdik is partially supported by the Hungarian NSF OTKA No. T 032658 and by the NATO fellowship 3008/02. This paper is in final form and no version of it will be submitted for publication elsewhere.
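The moment-to-cumulant relations quoted in the introduction can be reproduced mechanically. The following sketch (our illustration, not part of the paper; it assumes SymPy is available) expands $\psi_X(\lambda) = \ln \varphi_X(\lambda)$ with symbolic moments and recovers the first three cumulants:

```python
# Sketch (not from the paper): recover kappa_1..kappa_3 in terms of the
# moments by differentiating psi(lambda) = ln phi(lambda) at lambda = 0.
import sympy as sp

lam = sp.symbols('lambda')
mu = sp.symbols('mu1:5')  # symbolic moments mu_1, ..., mu_4

# Characteristic function, truncated after the mu_4 term (enough for kappa_3).
phi = 1 + sum(mu[j - 1] * (sp.I * lam)**j / sp.factorial(j) for j in range(1, 5))
psi = sp.log(phi)

# kappa_j = (-i)^j d^j psi / dlambda^j at lambda = 0, for j = 1, 2, 3.
kappa = [sp.expand(sp.simplify((-sp.I)**j * sp.diff(psi, lam, j).subs(lam, 0)))
         for j in range(1, 4)]

assert kappa[0] == mu[0]                                              # kappa_1 = mu_1
assert sp.expand(kappa[1] - (mu[1] - mu[0]**2)) == 0                  # kappa_2 = mu_2 - mu_1^2
assert sp.expand(kappa[2] - (mu[2] - 3*mu[0]*mu[1] + 2*mu[0]**3)) == 0  # kappa_3
```

The first two assertions confirm the relations $\kappa_1 = \mu_1$ and $\kappa_2 = \mu_2 - \mu_1^2$ stated above; the third gives the next relation in the same family.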