Working Group: Markov Processes and Mixing

Spring 2023

Logistical Information

Organizer Gautam Iyer
Location All talks are on Tuesdays at 2:00PM in WEH 7218, unless otherwise noted.
Current Schedule
Mailing List Please email Gautam Iyer to be added to the list.

Tentative schedule

Jan 24 Alan Frieze: Markov Chains for Counting, and Related Problems (Special time 2:30PM)
Jan 31 Wesley Pegden: Markov Chains for Sampling vs Markov Chains for Outlier detection
Feb 7 Wesley Pegden: Markov Chains for Sampling vs Markov Chains for Outlier detection (cont.)
Feb 14 Alan Frieze: Volume Computation: A brief overview
Feb 21 Martin Larsson: Minimizing asymptotic relative growth
Feb 28 Prasad Tetali: Modified Log-Sobolev Inequality (MLSI): theory, examples and consequences
Mar 7 (No meeting, Spring Break)
Mar 14 Konstantin Tikhomirov: Regularized modified log-Sobolev inequalities, and comparison of Markov chains.
Mar 21 Mykhaylo Shkolnikov: Shuffling Cards and Stopping Times
Mar 28 Mykhaylo Shkolnikov: Strong Stationary Times, and Intertwining
Apr 4 Ruiyu Han: Cutoff phenomenon for Markov Chains
Apr 11 Gautam Iyer: Using Jumps to Speed up Random Walks
Apr 18 Gautam Iyer: Using Jumps to Speed up Random Walks
Apr 25 Sangmin Park: Kinetic Equilibration Rates for Granular Media
May 2 Dejan Slepčev: Interacting particle methods for sampling


Here are a few topics we are considering for the semester, in no particular order. If you’d like to give a lecture or two on one of these topics, browse through the references and send me an email. Talks are all informal, so not knowing every last detail about the subject is OK 😄.

If you have other topics you’d like to think about, let me know and I can add them to the list.

Markov Chains for Counting (Alan)

  • How do you count perfect matchings in a bipartite graph? A Markov chain approach, Cheeger constants (conductance), and other fun things. Reference: Notes on counting and mixing
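
A toy version of the kind of chain this bullet refers to — my guess at a minimal edge-toggle chain on matchings of $K_{2,2}$ (the chains in the notes are more elaborate and target perfect matchings, so treat this as a sketch, not the actual construction):

```python
import random
from collections import Counter

# Edge-toggle chain on matchings of a small bipartite graph: pick a uniform
# edge; remove it if present, add it if both endpoints are free, else stay.
# The chain is symmetric, so its stationary distribution is uniform over
# all matchings.
edges = [(0, 0), (0, 1), (1, 0), (1, 1)]  # K_{2,2}: left vertex i -- right vertex j

def step(matching, rng):
    """One move of the edge-toggle chain on frozensets of edges."""
    e = rng.choice(edges)
    u, v = e
    if e in matching:
        return matching - {e}              # remove e
    if all(a != u for a, _ in matching) and all(b != v for _, b in matching):
        return matching | {e}              # add e: both endpoints are free
    return matching                        # blocked move: stay put (laziness)

rng = random.Random(0)
m, counts = frozenset(), Counter()
for _ in range(50_000):
    m = step(m, rng)
    counts[m] += 1

# K_{2,2} has 7 matchings (1 empty, 4 single edges, 2 perfect), and the
# chain visits each roughly 1/7 of the time.
print(len(counts))
```

Counting then proceeds by relating the number of perfect matchings to ratios of probabilities under such chains; the mixing time (controlled via conductance) is what makes the counting algorithm efficient.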

Sampling and outlier detection (Wes)

Volume Computation: A brief overview (Alan)

Minimizing Asymptotic Relative Growth (Martin)

  • Ergodic models are rarely useful for describing financial data, but there are exceptions. I will discuss one such exception, which involves a problem of growth-rate maximization under model uncertainty. In this problem, ergodic Langevin diffusions with a certain optimality property arise naturally. Reference: Kardaras, Robertson

Modified Log-Sobolev Inequality (Prasad, Kostya)

  • The rate of entropy decay in finite (discrete-space) Markov chains is better captured by a variant of the standard LSI. This brief overview will introduce the MLSI and mention some example applications — such as the transposition chain on permutations and the matroid basis-exchange walk — as well as a consequence for measure concentration. Time permitting, a general decomposition theorem for the LSI, the spectral gap, and the MLSI will also be outlined, along with some more recent results. References: Bobkov, Tetali; Jerrum, Son, Tetali, Vigoda; Hermon, Salez; Salez, Tikhomirov, Youssef

  • The “regularized” version of the Modified Log-Sobolev Inequality (MLSI) for log-Lipschitz functions allows using standard comparison techniques to get strong mixing time bounds for some Markov chains. References: Tikhomirov, Youssef

Strong Stationary Times (Misha)

  • How many times do you need to shuffle a deck of cards before it’s random? (And applications to random walks.) Reference: Aldous, Diaconis
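
The classical example behind this bullet is the top-to-random shuffle, where the strong stationary time is one step after the original bottom card first reaches the top (Aldous–Diaconis). A minimal simulation (deck size and trial count are arbitrary choices):

```python
import random

def top_to_random_time(n, rng):
    """Top-to-random shuffle on an n-card deck: return the strong stationary
    time, i.e. one step after the original bottom card reaches the top.
    At that moment the deck is exactly uniformly distributed."""
    deck = list(range(n))
    bottom = deck[-1]
    t = 0
    while deck[0] != bottom:
        card = deck.pop(0)                           # take the top card
        deck.insert(rng.randrange(len(deck) + 1), card)  # uniform reinsertion
        t += 1
    return t + 1  # one final insertion of the old bottom card finishes the job

rng = random.Random(1)
n = 10
times = [top_to_random_time(n, rng) for _ in range(2000)]
# E[T] = n * H_{n-1} + 1, about 29.3 for n = 10 (roughly n log n).
print(sum(times) / len(times))
```

Since the deck is exactly uniform at the (randomized) time $T$, the total-variation distance at time $t$ is bounded by $P(T > t)$, which is the mechanism the talk will develop.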

  • Intertwining for diffusions.

Enhanced dissipation / Speeding up Random Walks by Mixing (Gautam)

  • Fix $N$ and look at the random walk $X_{n+1} = f(X_n) + ε_{n+1} \pmod{N}$, where the $ε_n$’s are i.i.d. uniformly distributed on $\{\pm 1, 0\}$, and $f$ is a bijection on $\{0, \dots, N-1\}$. For “almost every” $f$, the mixing time is $O(\log N)$. Reference: Chatterjee, Diaconis
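
A sketch of this speed-up, with a uniformly random permutation standing in for a “typical” $f$ (an assumption on my part — the theorem is for almost every bijection, and a random one is typical with high probability). The distribution of $X_n$ is evolved exactly, so no Monte Carlo noise:

```python
import random

# X_{n+1} = f(X_n) + eps (mod N), eps uniform on {-1, 0, +1},
# f a fixed random permutation of {0, ..., N-1}.
rng = random.Random(0)
N = 512
f = list(range(N))
rng.shuffle(f)

def tv_to_uniform(p):
    """Total-variation distance from the uniform distribution on Z_N."""
    return 0.5 * sum(abs(x - 1.0 / N) for x in p)

p = [0.0] * N
p[0] = 1.0                        # start from a point mass at 0
for _ in range(40):               # O(log N) steps; log2(512) = 9
    q = [0.0] * N
    for i, mass in enumerate(p):
        for eps in (-1, 0, 1):    # push mass through one step of the walk
            q[(f[i] + eps) % N] += mass / 3.0
    p = q

# Without f (the plain lazy +/-1 walk) mixing takes order N^2 steps;
# with a typical f the walk is already close to uniform here.
print(tv_to_uniform(p))
```

The contrast with the un-permuted walk (set `f = list(range(N))` and watch the distance stay near 1) is what makes the title’s “speed up” concrete.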

  • Look at $dX = u(X) \, dt + \sqrt{2\kappa} \, dW$ on the torus, where the deterministic flow of $u$ is “sufficiently mixing”. Then the mixing time of $X$ is $O(|\ln \kappa|^3)$. References: Feng, Iyer ‘19; Iyer, Zhou ‘22.

  • Consider an overdamped Langevin system $dX_t = -\nabla U(X_t) \, dt + \sqrt{2\kappa} \, dW_t$ on the torus. When the potential is non-convex, the mixing time is of order $e^{1/\kappa}$. You can add a mixing drift to this to make the mixing time polynomial in $1/\kappa$.
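
A quick Euler–Maruyama sketch of the metastability behind the $e^{1/\kappa}$ bound, using the illustrative double-well potential $U(x) = \cos(4\pi x)$ on the one-dimensional torus (my choice, not taken from the talks):

```python
import math
import random

# Euler-Maruyama for dX_t = -U'(X_t) dt + sqrt(2 kappa) dW_t on [0, 1),
# with U(x) = cos(4 pi x): wells near x = 0.25 and x = 0.75, separated by
# a barrier of height 2. For small kappa, crossings are ~ exp(-2/kappa)
# rare, which is the source of the exponentially large mixing time.
rng = random.Random(2)

def simulate(x0, kappa, dt=1e-3, steps=10_000):
    x = x0
    for _ in range(steps):
        drift = 4 * math.pi * math.sin(4 * math.pi * x)   # -U'(x)
        x += drift * dt + math.sqrt(2 * kappa * dt) * rng.gauss(0.0, 1.0)
        x %= 1.0                                          # wrap around the torus
    return x

# With kappa = 0.05, a particle started at 0.25 essentially always ends
# the run still in the left well (0, 0.5).
ends = [simulate(0.25, kappa=0.05) for _ in range(40)]
stayed = sum(1 for x in ends if 0.0 < x < 0.5)
print(stayed, "of", len(ends), "runs stayed in the starting well")
```

Adding a strong mixing drift (e.g. a shear or cellular flow in higher dimensions) lets the noise explore both wells much faster, which is the polynomial-in-$1/\kappa$ improvement the bullet mentions.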

  • Blow-up suppression in Nonlinear PDEs: If the mixing time of $dX = u(X) \, dt + \sqrt{\kappa} \, dW_t$ is small enough, then you can avoid “blow up” in certain nonlinear PDEs (like the Keller–Segel model for chemotaxis). Reference: Iyer, Xu, Zlatoš ‘19.

  • The long-time asymptotics of certain nonlinear, nonlocal, diffusive equations with a gradient flow structure. Reference: Carrillo, McCann, Villani

Interacting Particle Methods for Sampling

Cutoff Phenomenon

  • In some Markov chains, the distance to equilibrium drops dramatically during a small window when mixing occurs. Techniques to prove this, examples, and applications. Reference: Levin, Peres, Wilmer, Chapter 18
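
A small sketch exhibiting the cutoff window, using what I believe is the standard hypercube example from Levin–Peres–Wilmer: the “refresh a uniformly chosen coordinate” walk on $\{0,1\}^d$. Its distance to uniform can be computed exactly via the coupon-collector structure of the refreshed coordinates:

```python
import math

d = 50  # dimension of the hypercube; cutoff occurs near (d/2) log d

def touched_distribution(t):
    """q[k] = P(exactly k coordinates have been refreshed after t steps)."""
    q = [0.0] * (d + 1)
    q[0] = 1.0
    for _ in range(t):
        new = [0.0] * (d + 1)
        for k, mass in enumerate(q):
            if mass:
                new[k] += mass * k / d            # refresh an already-touched one
                if k < d:
                    new[k + 1] += mass * (d - k) / d  # touch a new coordinate
        q = new
    return q

def tv(t):
    """Exact TV distance to uniform at time t, started from all-zeros.
    Given k touched coordinates, X is uniform on them and 0 elsewhere, so
    P(X = x) depends only on the weight w = |supp(x)|."""
    q = touched_distribution(t)
    total = 0.0
    for w in range(d + 1):
        p_w = sum(q[k] * math.comb(d - w, k - w) / math.comb(d, k) * 0.5 ** k
                  for k in range(w, d + 1))
        total += math.comb(d, w) * abs(p_w - 0.5 ** d)
    return 0.5 * total

# The distance stays near 1, then collapses to near 0 over a window of
# width O(d) around t ~ (d/2) log d (about 98 here).
for t in (25, 100, 175, 250, 325):
    print(t, round(tv(t), 3))
```

Plotting `tv(t)` against $t$ for several values of $d$ (rescaled by $(d/2)\log d$) shows the transition sharpening as $d$ grows, which is the cutoff phenomenon itself.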

Randomly Switched Systems

  • A randomly switched system is one where you take finitely many vector fields $u_1, \dots, u_N$, run the flow of one of them for a random (exponentially distributed) time, and then switch to a different one and repeat. Under certain “user-friendly” conditions these have a unique invariant measure and mix exponentially. Reference: Benaïm, Hurth, Strickler.
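
A one-dimensional toy example (my own choice of vector fields, not taken from the reference): $u_1(x) = -(x-1)$ and $u_2(x) = -(x+1)$, each of which alone drives $x$ to $+1$ or $-1$. With exponential holding times, the switched system wanders inside $(-1, 1)$, where it has a unique invariant measure:

```python
import math
import random

# Randomly switched system: run the flow of the current field for an
# Exp(1)-distributed time, then switch to the other field. The ODE
# x' = -(x - target) is solved exactly, so no discretization error.
rng = random.Random(3)

def sample_path(x0=0.0, switches=10_000):
    x, target, points = x0, 1.0, []
    for _ in range(switches):
        t = rng.expovariate(1.0)                    # exponential holding time
        x = target + (x - target) * math.exp(-t)    # exact flow for time t
        points.append(x)
        target = -target                            # switch vector fields
    return points

pts = sample_path()
print(min(pts), max(pts))  # the path approaches, but never leaves, (-1, 1)
```

The observed range hugging $(-1, 1)$ reflects the support of the invariant measure; the “user-friendly” conditions in the reference (a Hörmander-type bracket condition at an accessible point) are what upgrade existence of this measure to uniqueness and exponential mixing.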