Time-Dependent Polynomial Chaos

Marc Gerritsma*, Peter Vos† and Jan-Bart van der Steen*

*Dept. Aerospace Engineering, TU Delft, Kluyverweg 1, 2629 HS Delft, The Netherlands
†Dept. Aeronautics, Imperial College, South Kensington Campus, London SW7 2AZ, UK

Abstract. Conventional generalized polynomial chaos is known to fail for long-time integration, losing its optimal convergence behaviour and developing unacceptable error levels. The reason for this loss of convergence is the assumption that the probability density function is constant in time. By allowing the probability density function to evolve in time, the optimal properties of polynomial chaos are retrieved without resorting to high polynomial degrees. This time-dependent approach is applied to a system of coupled non-linear differential equations, and the results are compared to conventional generalized polynomial chaos solutions and Monte Carlo simulations.

Keywords: Uncertainty, stochastic ODE, Kraichnan-Orszag problem, polynomial chaos, long-term integration
PACS: 05.00.00

GENERALIZED POLYNOMIAL CHAOS

Generalized polynomial chaos (gPC) is employed to represent stochastic processes, i.e. processes involving some form of randomness. Such processes can be represented by a stochastic mathematical model, often expressed in terms of stochastic differential equations. Stochastic mathematical models are based on a probability space (Ω, F, P), where Ω is the sample space, F ⊂ 2^Ω its σ-algebra of events, and P its probability measure. Considering in addition a physical domain D ⊂ R^d × T (d = 1, 2, 3), which can be a combination of spatial and temporal dimensions, a stochastic process can be seen as a scalar- or vector-valued function u(x, t, ω) : D × Ω → R^b, where x is an element of the physical space, t denotes the time and ω is a point in the sample space Ω.
Furthermore, because of the infinite-dimensional nature of the probability space, we discretize this space by characterizing it by a finite number of random variables {ξ_j(ω)}_{j=1}^N, N ∈ N. This can be seen as assigning a finite number of coordinates {ξ_j}_{j=1}^N to the probability space, reducing it to a finite-dimensional space Λ ⊂ R^N. Consequently, the stochastic process u becomes a mapping u(x, t, ξ) : D × Λ → R^b. It is important to note that in this work we assume that the occurring stochastic processes are already characterized by a known set of random variables.

Wiener was the first to represent stochastic processes by orthogonal polynomial expansions [1]. To accomplish this, he used Hermite polynomials in terms of Gaussian random variables to represent Gaussian processes, which is referred to as homogeneous chaos. In this way, the stochastic process is represented in the form

u(x, t, ξ(ω)) = Σ_{i=0}^∞ u_i(x, t) H_i(ξ(ω)),   (1)

in which the H_i are Hermite polynomials and ξ is a vector of Gaussian random variables with zero mean and unit variance. This is a spectral expansion in the random dimensions employing deterministic coefficients u_i(x, t). According to the Cameron-Martin theorem [2], for fixed values of x and t, this expansion converges to any L^2(Ω) functional in the L^2(Ω) sense. This implies that the application of polynomial chaos is restricted to those stochastic processes satisfying

∫_Ω |u(x, t, ω)|^2 dP(ω) < ∞.   (2)

As a result, polynomial chaos is restricted to second-order stochastic processes, i.e. processes with finite second-order moments. These are processes with finite variance, which covers most physical processes. Although Wiener's original polynomial chaos expansion converges to any second-order stochastic process, it is most suitable for representing Gaussian processes: the Gaussian nature of the random variables then yields fast convergence.
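To make the expansion (1) concrete, the following sketch computes the Hermite chaos coefficients of a simple one-dimensional process u(ξ) = exp(ξ) with ξ ∼ N(0, 1), by projecting onto the probabilists' Hermite polynomials He_i via Gauss-Hermite quadrature: u_i = E[u(ξ) He_i(ξ)] / E[He_i(ξ)^2], with E[He_i^2] = i!. The test function, the degree P, and the quadrature order are illustrative choices, not taken from the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi, exp

P = 8  # highest retained polynomial degree

# Gauss-Hermite nodes/weights for the weight e^{-x^2/2}; normalising by
# sqrt(2*pi) turns the quadrature rule into an expectation over N(0, 1).
nodes, weights = hermegauss(20)
weights = weights / sqrt(2.0 * pi)

u = np.exp(nodes)  # samples of the process u(xi) = exp(xi) at the nodes

# u_i = E[u He_i] / E[He_i^2], where E[He_i^2] = i! for probabilists' Hermite.
coeffs = np.array([
    np.sum(weights * u * hermeval(nodes, np.eye(P + 1)[i])) / factorial(i)
    for i in range(P + 1)
])

# The zeroth coefficient is the mean of the process; exactly e^{1/2} here,
# since exp(xi) = e^{1/2} * sum_i He_i(xi) / i!.
print(coeffs[0], exp(0.5))
```

Because exp(ξ) is smooth in ξ, the coefficients decay factorially (u_i = e^{1/2}/i!), which is the spectral convergence in the random dimension that the gPC representation is designed to exploit.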