The corresponding notebooks are here: github repository and docker
Physical systems are often described by a very large number of degrees of freedom.
Obtaining information, such as the partition function, in principle requires computing a sum over all of them.
In most cases, this is very difficult! Example: for an Ising model with $N$ spins, the sum has $2^N$ terms.
Monte Carlo methods allow us to circumvent this difficulty using a stochastic approach.
What are the chances of winning a game of Solitaire?
Rather than trying to solve the complex combinatorial problem, Ulam reasoned that he could play 100 games and estimate the answer from the statistics.
With the appearance of the first computers, this idea gave birth to the first Monte Carlo algorithms.
Underlying principle in Monte Carlo methods: In order to gain information about a system that depends on many degrees of freedom, it is enough to inspect a subset of cleverly chosen representatives.
These representatives are generated by a stochastic process.
Monte Carlo methods are now widely used in modern computational physics. They exist in many different flavours.
We will be interested in Monte Carlo integration.
import numpy as np

n_pebbles = 2**12
pebbles = np.random.uniform(-1, 1, size=(2, n_pebbles))  # points in the square [-1, 1]^2
n_inside = np.sum(np.linalg.norm(pebbles, axis=0) < 1)   # count points inside the unit disk
print(f"inside: {n_inside}, total: {n_pebbles}, π ≃ {4*n_inside/n_pebbles}")
inside: 3244, total: 4096, π ≃ 3.16796875
Throwing the $n^\mathrm{th}$ pebble is described by a random variable
\begin{equation*} x_n = \begin{cases} 4 \quad \text{with probability} \quad \pi/4 \\ 0 \quad \text{with probability} \quad (1-\pi/4) \end{cases} \end{equation*}
The average of $x_n$ is
\begin{equation*} \langle x_n \rangle = \frac{\pi}{4} \cdot 4 + \left(1-\frac{\pi}{4}\right) \cdot 0 = \pi \end{equation*}
We have traded an integration for the computation of an average value.
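This trade can be made explicit in code. The following sketch (assuming NumPy; the seed and sample size are arbitrary choices) builds the random variable $x_n$ from the pebble positions and estimates $\pi$ as its sample mean, together with the statistical uncertainty of that mean:

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 2**16
# Throw pebbles uniformly in the square [-1, 1]^2.
pebbles = rng.uniform(-1, 1, size=(2, n_samples))
# x_n = 4 if the pebble lands inside the unit disk, 0 otherwise.
x = np.where(np.linalg.norm(pebbles, axis=0) < 1, 4.0, 0.0)

estimate = x.mean()                             # sample mean approximates <x_n> = pi
std_error = x.std(ddof=1) / np.sqrt(n_samples)  # uncertainty of the sample mean
print(f"π ≈ {estimate:.4f} ± {std_error:.4f}")
```

The standard error shrinks as $1/\sqrt{n}$, which is the characteristic convergence rate of such sample-mean estimators.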
Can a similar strategy be used to compute integrals of multivariable functions?
How does this approach compare to standard quadrature methods?
How should the stochastic sampling be done?
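As a preview of the first question, the pebble strategy carries over to higher dimensions with almost no change. A minimal sketch (assuming NumPy; function name, seed, and sample size are illustrative choices) estimates the volume of the unit ball in $d$ dimensions by averaging an indicator over points drawn uniformly in the cube $[-1, 1]^d$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_volume_unit_ball(d, n_samples=10**5):
    """Monte Carlo estimate of the volume of the unit ball in d dimensions."""
    # Sample points uniformly in the cube [-1, 1]^d (volume 2^d) and
    # average the indicator of the ball, exactly as with the pebbles.
    points = rng.uniform(-1, 1, size=(n_samples, d))
    inside = np.linalg.norm(points, axis=1) < 1
    return 2**d * inside.mean()

print(mc_volume_unit_ball(3))  # exact value: 4π/3 ≈ 4.1888
```

The cost of this estimate grows with the number of samples, not with the dimension of the integral, which is the key advantage over standard quadrature explored in the questions above.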