Parallel and Distributed Computing Issues in Pricing Financial Derivatives through Quasi Monte Carlo

Ashok Srinivasan
Department of Computer Science, Florida State University, Tallahassee FL 32306, USA
Email: asriniva@cs.fsu.edu

Abstract

Monte Carlo (MC) techniques are often used to price complex financial derivatives. The computational effort can be substantial when high accuracy is required. However, MC computations are latency tolerant, and are thus easy to parallelize even with high communication overheads, such as in a distributed computing environment. A drawback of MC is its relatively slow convergence rate, which can be overcome through the use of Quasi Monte Carlo (QMC) techniques, which use low discrepancy sequences. We discuss the issues that arise in parallelizing QMC, especially in a heterogeneous computing environment, and present results of empirical studies on arithmetic Asian options, using three parallel QMC techniques that have recently been proposed. We expect the conclusions to be valid for other applications too.

1. Introduction

A financial derivative is a function of some basic variables, such as stock price [4]. Derivatives are an important mechanism for controlling financial risk, and the volume of trade in derivatives is enormous. The price of a derivative is determined by first choosing a suitable mathematical model, and then solving it. Only the simplest models have analytical solutions, and thus numerical techniques are often required. We will use the "Arithmetic Asian option", to be defined in section 2, as an example throughout this paper to illustrate the issues that arise in parallelization. Monte Carlo (MC) techniques are often used to price complex options. If high accuracy is desired, the computational effort can be tremendous for complex models, and thus there has been recent interest in parallelization [7, 9, 13], both with MC and non-MC techniques.
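To make the computational setting concrete, the following is a minimal sketch of plain MC pricing for an arithmetic Asian call. It assumes standard geometric Brownian motion dynamics with risk-free rate r and volatility sigma; the function name and parameter values are illustrative, not taken from the paper:

```python
import math
import random

def price_asian_call_mc(s0, strike, r, sigma, T, n_steps, n_paths, seed=0):
    """Price an arithmetic Asian call by plain Monte Carlo, assuming
    geometric Brownian motion for the underlying asset.  The payoff is
    max(mean(S_t) - strike, 0), averaged over the monitoring dates and
    discounted at the risk-free rate r."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s = s0
        path_sum = 0.0
        for _ in range(n_steps):
            # One Euler step of exact GBM: S <- S * exp((r - sigma^2/2)dt + sigma sqrt(dt) Z)
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            path_sum += s
        avg = path_sum / n_steps          # arithmetic average along the path
        total += max(avg - strike, 0.0)   # call payoff on the average
    return math.exp(-r * T) * total / n_paths

price = price_asian_call_mc(s0=100.0, strike=100.0, r=0.05,
                            sigma=0.2, T=1.0, n_steps=50, n_paths=20000)
print(round(price, 2))
```

Each path is independent, which is what makes MC latency tolerant: processors need only agree on disjoint random streams and combine their partial averages at the end.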
MC techniques can be parallelized by executing identical sequential algorithms on each processor, independently, with just the pseudo-random number sequence being different, and the results from each processor finally being combined. Though implementation details require occasional communication, the speed-up is essentially linear, leading to the common perception that MC is "embarrassingly parallel". However, popular techniques for parallelizing pseudo-random number generators (PRNGs) can lead to a situation where the efficiency in terms of time is close to 100%, while the efficiency in terms of error reduction is much lower. In fact, the results can even be wrong, as demonstrated in [12] and several other studies.

An important drawback of MC is that its convergence rate of O(N^{-1/2}) with N samples is not sufficiently good for many applications. This can be overcome through the use of Quasi Monte Carlo (QMC) techniques, which use low discrepancy sequences (LDS). Unlike the pseudo-random sequences used in MC, which try to produce a sequence with the statistical properties of a random sequence, LDSs try to ensure uniformity rather than randomness. A deterministic error bound of O((log N)^d / N) in d-dimensional integration can be obtained from the Koksma-Hlawka inequality. When the effective dimensionality of the problem is not very high, QMC can converge faster than MC, and several studies, such as [5], have demonstrated its advantages over MC in a variety of financial applications. The issues that arise in parallelizing QMC are quite different from those that arise in parallelizing MC, primarily because low discrepancy sequences are inherently deterministic, and there has been interest in recent times in parallelizing QMC [1, 9, 10, 11]. We will discuss the issues that arise in parallel QMC, and three techniques that have recently been proposed for it, in section 3.
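The uniformity of an LDS can be illustrated with the classical Halton sequence, whose coordinates are van der Corput sequences in distinct prime bases. The sketch below is illustrative only; the blocking idea noted in the comment is one common parallelization strategy from the literature, not necessarily one of the three techniques studied in this paper:

```python
def halton(index, base):
    """The index-th element (1-based) of the van der Corput sequence in
    the given base: reflect the base-b digits of the index about the
    radix point, producing a point in [0, 1)."""
    f, result = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_point(index, bases=(2, 3)):
    """A point of the 2-D Halton sequence, using prime bases 2 and 3."""
    return tuple(halton(index, b) for b in bases)

def qmc_integrate(f, n, bases=(2, 3)):
    """QMC estimate of the integral of f over the unit square using the
    first n Halton points.  A parallel version could, for example, give
    each processor a disjoint block of indices of the same sequence."""
    return sum(f(*halton_point(i, bases)) for i in range(1, n + 1)) / n

# Integral of x*y over [0,1]^2 is exactly 1/4; the QMC error shrinks
# roughly like (log N)^d / N rather than the MC rate N^{-1/2}.
est = qmc_integrate(lambda x, y: x * y, 4096)
print(abs(est - 0.25))
```

Because every point of the sequence is determined by its index, a parallel scheme must decide how indices are shared among processors, which is precisely where the determinism of an LDS makes parallel QMC harder than parallel MC.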
We demonstrate the different issues that arise in QMC parallelization by performing empirical tests on the Arithmetic Asian option, using three different QMC parallelization techniques under different scenarios. The results are presented in section 4. The different techniques have different characteristics, and we conclude by summarizing them, and potential improvements to the techniques, in section 5.

Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS'02) 1530-2075/02 $17.00 © 2002 IEEE