In the long run, the stock market always goes up—or so
we’re told. Thus, a sure way to increase your wealth is
to buy some shares of everything sold on the New York
Stock Exchange. But wait a minute, you say—that’s not
practical, and it would cost way too much money. Isn’t there
some way to buy a sample of shares in a way that guarantees
it goes up along with the market? Well, yes, actually, there
is—you could buy mutual funds. Of course, there’s a risk
that the fund you buy won’t do a good job of sampling and
will go down instead of up, but that probably won’t happen
with a well-established fund that uses a good sampling algorithm, one that really does follow the market.
Monte Carlo methods embody this sampling philosophy in a set of remarkably versatile algorithms that prove themselves useful when other methods aren’t practical for solving difficult numerical problems. When well-designed, they can tell you a lot about what’s going on without forcing you to look at every possibility. We focus in this case study on three uses of Monte Carlo methods: for function minimization, for discrete optimization, and for counting.
Function Minimization
A strictly convex function f(x) on the interval a ≤ x ≤ b attains a minimum that we can find with a variety of methods,
including the many versions of Newton’s method, conjugate
gradients, and (if derivatives aren’t available) pattern search
algorithms. For nonconvex functions, such as that in Figure
1, these algorithms find a local minimizer such as x = 0.4 but
aren’t guaranteed to find the global minimizer x* = 1.8.
Minimization Using Monte Carlo Techniques
Monte Carlo methods provide a good means for generating
starting points for nonconvex optimization problems. In its
simplest form, a Monte Carlo method generates a random
sample of points in the function’s domain. We can then use
our favorite minimization algorithm starting from each of
these points and, among the minimizers found, report the
best one. By increasing the number of Monte Carlo points,
we increase the probability that we’ll find the global minimizer. Algorithm 1 summarizes this method. Note that our “favorite” minimization algorithm can be as simple as reporting the starting point.
ALGORITHM 1. MONTE CARLO MINIMIZATION
We want to minimize the function f(x) over the region a ≤ x ≤ b.

for each random point x_i generated in the region,
    Use an algorithm to approximate a local minimizer of f(x), starting at x_i, restricting the search to the region defined by a and b.
    If the resulting minimizer gives a lower function value than all previous minimizers, remember it.
end
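As a sketch in Python, Algorithm 1 might look like the following. The test function, the bounds, and the crude step-shrinking local search are all illustrative assumptions, standing in for the function in Figure 1 and for your favorite local minimizer (which, as noted, can be as simple as reporting the starting point itself).

```python
import math
import random

def local_refine(f, x, a, b, step=0.1, shrink=0.5, iters=50):
    """Crude local descent: repeatedly move to the lowest of
    {x - step, x, x + step}, clipped to [a, b], shrinking the step.
    A stand-in for your favorite local minimizer."""
    for _ in range(iters):
        candidates = [max(a, x - step), x, min(b, x + step)]
        x = min(candidates, key=f)
        step *= shrink
    return x

def monte_carlo_minimize(f, a, b, n_points=200, seed=1):
    """Algorithm 1: sample random starting points in [a, b],
    locally minimize from each, and report the best minimizer."""
    rng = random.Random(seed)
    best_x, best_f = None, math.inf
    for _ in range(n_points):
        x = local_refine(f, rng.uniform(a, b), a, b)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Illustrative nonconvex function (not the one in Figure 1):
# its global minimum value is 0, attained at x = 1.8.
f = lambda x: (x - 1.8) ** 2 * (1 + 0.5 * math.sin(8 * x))
x_star, f_star = monte_carlo_minimize(f, 0.0, 3.0)
```

With more random starting points, more of the local basins get visited, so the chance of landing in the global minimizer’s basin grows.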
Just as more information helps in choosing a mutual fund, this Monte Carlo method can be improved somewhat by using extra information about the function f(x) that we’re minimizing. Suppose we know a Lipschitz constant L for our function, so that for all x and y in the domain,

|f(x) – f(y)| ≤ L ||x – y||.
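A Lipschitz constant turns known function values into a certified lower bound on f, which a sampler can use to discard hopeless candidate points. Here is a minimal Python sketch, assuming L = 1 and two known values f(1) = 2 and f(4) = 0; the function names are illustrative.

```python
def lipschitz_lower_bound(x, samples, L):
    """Certified lower bound: f(x) >= f(x_i) - L*|x - x_i|
    for every known sample (x_i, f_i), so take the max."""
    return max(fi - L * abs(x - xi) for xi, fi in samples)

samples = [(1.0, 2.0), (4.0, 0.0)]     # known values f(1) = 2, f(4) = 0
L = 1.0                                # assumed Lipschitz constant
best = min(fi for _, fi in samples)    # best value seen so far: 0.0

def can_skip(x):
    """A point can be skipped if its certified lower bound
    already exceeds the best value seen so far."""
    return lipschitz_lower_bound(x, samples, L) > best
```

For example, at x = 2 the bound f(2) ≥ 2 − |2 − 1| = 1 exceeds the known value f(4) = 0, so x = 2 can never be the global minimizer and need not be sampled.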
To make the example specific, let’s suppose L = 1 and that we
know f (1) = 2 and f (4) = 0. Because we know f (4) = 0, we
know the global minimizer gives a function value of at most
zero. Thus, we’re no longer interested in looking at intervals
that will have function values greater than zero. Our Lipschitz relation tells us that f can’t decrease faster than a
straight line with slope –1. In Figure 2, the blue shaded area
shows where the function f must lie. In the interval marked
Copublished by the IEEE CS and the AIP, 1521-9615/07/$20.00 © 2007 IEEE, Computing in Science & Engineering

MONTE CARLO MINIMIZATION AND COUNTING: ONE, TWO, …, TOO MANY
By Isabel Beichl, Dianne P. O’Leary, and Francis Sullivan
Editor: Dianne P. O’Leary, oleary@cs.umd.edu

YOUR HOMEWORK ASSIGNMENT
Monte Carlo methods use sampling to produce approximate solutions to problems for which other methods
aren’t practical. In this homework assignment, we study three uses of Monte Carlo methods: for function
minimization, discrete optimization, and counting.