Markov chain methods

This module implements the functions required to estimate the partition function of an Ising model with Markov chain Monte Carlo methods.

To compute the partition function, we resort to the telescopic product:

\[Z_{\beta_{k}}=Z_{\beta_{0}}\frac{Z_{\beta_{1}}}{Z_{\beta_{0}}}\frac{Z_{\beta_{2}}}{Z_{\beta_{1}}}\cdots\frac{Z_{\beta_{k}}}{Z_{\beta_{k-1}}}\]

where \(\beta_0=0\) and the choice of the \(\beta_i\) is the annealing schedule. To estimate each ratio, we use the fact that the expectation value of the random variable

\[\exp(-(\beta_{i+1}-\beta_{i})H(x))\]

with respect to the Gibbs state \(e^{-\beta_{i}H}/Z_{\beta_{i}}\) is given by the ratio \(Z_{\beta_{i+1}}/Z_{\beta_{i}}\). By picking the \(\beta_{i}\) close enough to each other, we can reliably estimate this ratio by sampling from that Gibbs state and computing the empirical mean.
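As a minimal sketch of this estimator (the helper name and the array-of-energies interface below are illustrative, not part of the module):

    import numpy as np

    def estimate_ratio(energies, beta_i, beta_next):
        # Empirical-mean estimate of Z_{beta_next} / Z_{beta_i}.
        # `energies` holds values H(x) for configurations x drawn
        # (approximately) from the Gibbs state at inverse temperature beta_i.
        return np.mean(np.exp(-(beta_next - beta_i) * np.asarray(energies)))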

A large body of literature is devoted to sampling from the underlying Gibbs states as efficiently as possible, and likewise to choosing the annealing schedule so that the number of distinct \(\beta_{i}\) is as small as possible.

The first version of know your limits implements only the most basic version of these two subroutines (sampling from the Gibbs state and the annealing schedule).

For the sampling we use the Metropolis/Glauber dynamics algorithm, and for the annealing we use a uniform schedule. That is, the \(\beta_{i}\) are always incremented by the same amount. However, the user can also specify the annealing schedule, so the code can easily handle more sophisticated schedules the user wants to implement. In future versions we plan to implement more advanced sampling routines and annealing schedules.
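For instance, a uniform schedule can be built with numpy; the target inverse temperature and number of increments below are placeholders the user would choose:

    import numpy as np

    # k equal increments from beta_0 = 0 to a target inverse temperature
    # beta_max, i.e. k + 1 grid points in total.
    beta_max, k = 1.0, 100
    schedule = np.linspace(0.0, beta_max, k + 1)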

Also note that the Markov chain method is only (provably) efficient for \(\beta_{k}=\mathcal{O}(\|A\|^{-1})\). In that range, a number of Metropolis steps of order \(\mathcal{O}(n\log(n))\) should suffice to obtain a sample that is close to the Gibbs state. See e.g. the book Markov Chains and Mixing Times for a thorough discussion of these topics.
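One way to stay inside that regime is to normalize the target inverse temperature by a norm of \(A\). A sketch, using the induced 1-norm (which upper-bounds the spectral norm for symmetric matrices; SciPy does not compute the sparse 2-norm directly):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import norm as sparse_norm

    # Example: 3-spin ring. For a symmetric matrix the induced 1-norm
    # upper-bounds the spectral norm, so this beta_max keeps the whole
    # schedule inside the provably efficient regime.
    A = sp.csr_matrix(np.array([[0., 1., 1.],
                                [1., 0., 1.],
                                [1., 1., 0.]]))
    beta_max = 1.0 / sparse_norm(A, ord=1)
    schedule = np.linspace(0.0, beta_max, 101)  # uniform schedule as above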

mcmc.telescopic_product_external(A, b, schedule, samples, burn_in, verbose=1)

Uses the telescopic product approach to compute the partition function of an Ising model for a list of inverse temperatures. See the discussion above for an explanation of what this means exactly.

Parameters
  • A (scipy sparse matrix) – the (weighted) adjacency matrix of the Ising model whose partition function we wish to estimate.

  • b (vector) – the external fields.

  • schedule (array of floats) – an array containing the annealing schedule, i.e. the list of inverse temperatures at which we wish to estimate the partition function.

  • samples (integer) – the number of samples taken for the estimation at each value of the inverse temperature.

  • burn_in (integer) – the number of Metropolis steps taken so that the chain converges approximately to the Gibbs distribution before we start estimating the ratios of partition functions.

Returns

array containing the ratios of the partition functions in the schedule. That is, we compute \(\frac{Z_{\beta_{i+1}}}{Z_{\beta_{i}}}\) for consecutive entries \(\beta_i,\beta_{i+1}\) of the annealing schedule.

Return type

estimates_ratio ([array of floats])
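A hypothetical end-to-end call; the model, schedule, and sample counts below are illustrative, and only the signature above is taken from the module:

    import numpy as np
    import scipy.sparse as sp
    import mcmc  # the module documented on this page

    # 3-spin ring with zero external fields.
    A = sp.csr_matrix(np.array([[0., 1., 1.],
                                [1., 0., 1.],
                                [1., 1., 0.]]))
    b = np.zeros(3)
    schedule = np.linspace(0.0, 0.25, 11)  # beta_0 = 0 up to beta_k = 0.25

    ratios = mcmc.telescopic_product_external(A, b, schedule,
                                              samples=500, burn_in=50)

    # Since beta_0 = 0, Z_{beta_0} = 2^n, so the cumulative product of
    # the ratios recovers the partition function along the schedule.
    n = A.shape[0]
    Z = 2**n * np.cumprod(ratios)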

mcmc.update_external(current, beta, A, b, current_energy)

Implements one step of the Metropolis Markov chain algorithm to sample from a Gibbs state of a classical Ising model.

Parameters
  • current (array with +/-1 entries) – the current state of the chain.

  • beta (float) – inverse temperature of the Gibbs state we wish to sample from.

  • A (scipy sparse matrix) – the (weighted) adjacency matrix of the Ising model defining the Gibbs state.

  • b (vector) – the external fields.

  • current_energy (float) – the energy of the state current with respect to the model.

Returns

the next state of the spin system after one Metropolis step and its corresponding energy.

Return type

[current, new_energy] ([array with +/-1 entries, float])
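A hypothetical burn-in loop built on this update; the energy convention in ising_energy below is an assumption and may need to be adjusted to match the module's definition of \(H\):

    import numpy as np
    import scipy.sparse as sp
    import mcmc

    def ising_energy(x, A, b):
        # Assumed convention H(x) = -x.(Ax)/2 - b.x; adjust the sign and
        # normalization if the module defines H differently.
        return -0.5 * x @ (A @ x) - b @ x

    A = sp.csr_matrix(np.array([[0., 1.], [1., 0.]]))
    b = np.zeros(2)
    beta, burn_in = 0.5, 100

    rng = np.random.default_rng(0)
    current = rng.choice([-1.0, 1.0], size=2)
    energy = ising_energy(current, A, b)

    # Burn in: after enough Metropolis steps the chain is approximately
    # Gibbs-distributed at inverse temperature beta.
    for _ in range(burn_in):
        current, energy = mcmc.update_external(current, beta, A, b, energy)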