Assignment 5
Assigned 2025-10-01, due 2025-10-08
Question 1
Assume $(X_n)$ is a time homogeneous Markov chain on a finite state space $\mathcal X$ with transition matrix $P$. Let $\mathcal A = \set{a \in \mathcal X \st P(a, a) = 1}$ be the set of all absorbing states. Suppose for every $x \in \mathcal X$ there exists $a \in \mathcal A$ and $N \in \N$ such that $P^N(x, a) > 0$.
- True or false: There exists a probability distribution $\pi$ such that for every probability distribution $\mu_0$ we have $\norm{\mu_0 P^n - \pi}_\TV \to 0$ as $n \to \infty$. Prove it, or find a counterexample.
- True or false: For every probability distribution $\mu_0$, there exists a probability distribution $\pi$ such that $\norm{\mu_0 P^n - \pi}_\TV \to 0$ as $n \to \infty$. Prove it, or find a counterexample.
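Before proving anything, it can help to experiment numerically. The chain below is a toy example of my own (not part of the assignment): states $0$ and $2$ are absorbing, and state $1$ jumps to each of them with probability $1/2$, so the hypothesis on $\mathcal A$ is satisfied.

```python
import numpy as np

# Toy chain: states 0 and 2 are absorbing; state 1 jumps to each with prob 1/2.
P = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])

def push_forward(mu0, n=50):
    """Distribution of X_n when X_0 ~ mu0, i.e. the row vector mu0 P^n."""
    return mu0 @ np.linalg.matrix_power(P, n)

print(push_forward(np.array([0.0, 1.0, 0.0])))  # mass 1/2 on each absorbing state
print(push_forward(np.array([1.0, 0.0, 0.0])))  # all mass stays on state 0
```

Here $\mu_0 P^n$ settles down for each starting distribution, but the limit depends on where you start — worth keeping in mind when deciding which of the two statements is true.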
Question 2
Solve this question in a way that allows you to draw samples in $O(1)$ time after doing $O(N)$ setup.
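The question being referred to is not reproduced here, but the $O(N)$-setup / $O(1)$-per-draw requirement is the signature of Walker's alias method for sampling from a discrete distribution on $N$ outcomes. Assuming that is the setting, here is a sketch (the function names and structure are mine, not prescribed by the assignment):

```python
import numpy as np

def build_alias(weights):
    """O(N) setup: split the distribution into N equal-mass columns,
    each containing at most two outcomes."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    prob = w / w.sum() * n              # rescale so the average cell mass is 1
    alias = np.zeros(n, dtype=int)
    small = [i for i in range(n) if prob[i] < 1]
    large = [i for i in range(n) if prob[i] >= 1]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                    # column s is topped up by outcome l
        prob[l] -= 1 - prob[s]
        (small if prob[l] < 1 else large).append(l)
    return prob, alias

def sample(prob, alias, rng):
    """O(1) per draw: pick a uniform column, then flip one biased coin."""
    i = rng.integers(len(prob))
    return i if rng.random() < prob[i] else alias[i]

# Usage: frequencies of draws approach 0.1, 0.2, 0.3, 0.4.
rng = np.random.default_rng(0)
prob, alias = build_alias([1.0, 2.0, 3.0, 4.0])
draws = [sample(prob, alias, rng) for _ in range(10_000)]
```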
Question 3
The figure on the right illustrates rejection sampling from the target distribution $q$ using samples from a proposal distribution $p$. Your task is to write code to reproduce this figure. Here are a few things that will help:
- $p$ is the PDF of a normal distribution with standard deviation $\sigma_1 = 0.3$, conditioned on belonging to the interval $[-1, 1]$.
- $q$ is the PDF of a normal distribution with standard deviation $\sigma_2 = 0.1$, conditioned on belonging to the interval $[-1, 1]$.
- You can use `p = sp.truncnorm( ... ).pdf` and `q = sp.truncnorm( ... ).pdf` to produce Python functions for the above PDFs.
- You can sample from the PDF $p$ by drawing with `random.normal` and rejecting all values outside $[-1, 1]$. (Rejection sampling tells you this works.)
- Choose `N = 1000`. Let `X` be an array of `N` samples from $p$, let `U` be an array of `N` $\Unif([0, 1])$ samples, and let $M = \max q / p$. Then `scatter( X, U*M*p(X), s=3 )` will produce a scatter plot with points uniformly distributed under the graph of $M p$. Figure out how to color the points based on whether or not they are above the graph of $q$, add plots of $M p$ and $q$, and you should have the desired figure.
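Putting the hints together, one possible sketch looks like the following (here `sp` is assumed to be `scipy.stats`; the truncation arguments, variable names, and plotting details are my guesses, not the official solution):

```python
import numpy as np
import scipy.stats as sp
import matplotlib
matplotlib.use("Agg")  # assumption: render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

sigma1, sigma2 = 0.3, 0.1
# truncnorm takes its endpoints in units of the scale parameter
p = sp.truncnorm(-1 / sigma1, 1 / sigma1, loc=0, scale=sigma1).pdf
q = sp.truncnorm(-1 / sigma2, 1 / sigma2, loc=0, scale=sigma2).pdf

rng = np.random.default_rng(0)
N = 1000

# Sample from p: draw normals, reject anything outside [-1, 1], keep N of them.
draws = rng.normal(0, sigma1, size=4 * N)
X = draws[np.abs(draws) <= 1][:N]
U = rng.uniform(0, 1, N)

xs = np.linspace(-1, 1, 400)
M = np.max(q(xs) / p(xs))   # numerical stand-in for M = max q/p

Y = U * M * p(X)            # points uniform under the graph of M*p
accept = Y <= q(X)          # below the graph of q: accepted samples

plt.scatter(X[accept], Y[accept], s=3, color="C0")
plt.scatter(X[~accept], Y[~accept], s=3, color="C1")
plt.plot(xs, M * p(xs), label=r"$M\,p$")
plt.plot(xs, q(xs), label=r"$q$")
plt.legend()
plt.savefig("rejection.png")
```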
Turn in a screenshot of the relevant portions of the code, and the output.
Question 4
Suppose $X_n$ are i.i.d. samples from a distribution $p$, let $\mu$ be the mean of $p$, and define the empirical mean $\hat \mu_N$ by $\hat \mu_N \defeq \frac{1}{N} \sum_{n = 1}^N X_n$. By the law of large numbers we know $\hat \mu_N \to \mu$; moreover, for every fixed $N$, $\E \hat \mu_N = \mu$. For this reason we call $\hat \mu_N$ an unbiased estimator for the mean $\mu$.
Can you find an unbiased estimator for the variance? Explicitly, let $\sigma^2$ be the variance of $p$. Find a formula that defines $\hat \sigma^2_N$ in terms of $X_1$, …, $X_N$ (without using $\mu$, $\sigma$, or any other property of $p$) so that for every $N$, $\E \hat \sigma^2_N = \sigma^2$, and as $N \to \infty$ we have $\hat \sigma^2_N \to \sigma^2$.
Hint: To get started, try $\tilde \sigma^2_N = \frac{1}{N} \sum_{n = 1}^N (X_n - \hat \mu_N)^2$. This is an estimator (in the sense that $\tilde \sigma^2_N \to \sigma^2$ as $N \to \infty$), but it is not unbiased: you won't have $\E \tilde \sigma^2_N = \sigma^2$ for every $N$. Can you modify it somehow to make it unbiased?
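As a quick numerical sanity check on the hint (exploration only, not part of what you turn in), you can estimate $\E \tilde \sigma^2_N$ by simulation; note that `np.var` with `ddof=0` computes exactly $\frac{1}{N} \sum_n (X_n - \hat \mu_N)^2$.

```python
import numpy as np

# Draw many batches of N = 5 samples from a distribution with variance 1,
# then average the hinted estimator over the batches.
rng = np.random.default_rng(1)
N, trials = 5, 200_000
X = rng.normal(size=(trials, N))     # true variance is 1
tilde = X.var(axis=1, ddof=0)        # the hinted estimator, one value per batch
print(tilde.mean())                  # noticeably below 1 for small N
```

The average sits visibly below the true variance, confirming the bias; the size of the gap suggests how to fix it.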
Note: We have also been imprecise about the sense in which $\hat \sigma_N^2 \to \sigma^2$. Prove convergence of $\hat \sigma_N^2$ to $\sigma^2$ in whatever sense of convergence you used when you studied the law of large numbers.