import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
Let $X_0, X_1, \dots$ be a Markov chain with states $\{0, 1, 2\}$ and transition matrix
$$ \mathbf{P} = \begin{bmatrix} 0.1 & 0.2 & 0.7 \\ 0.9 & 0.1 & 0.0 \\ 0.1 & 0.8 & 0.1 \\ \end{bmatrix} $$Assume that $\mu_0 = (0.3, 0.4, 0.3)$. Find $\mathbb{P}(X_0 = 0, X_1 = 1, X_2 = 2)$ and $\mathbb{P}(X_0 = 0, X_1 = 1, X_2 = 1)$.
Solution:
For any $i, j, k \in \{0, 1, 2\}$,
$$ \begin{align*} \mathbb{P}(X_0 = i, X_1 = j, X_2 = k) &= \mathbb{P}(X_2 = k \mid X_1 = j) \mathbb{P}(X_1 = j \mid X_0 = i) \mathbb{P}(X_0 = i) \\ &= P_{jk} P_{ij} \mathbb{P}(X_0 = i) \end{align*} $$Thus,
$$ \begin{align*} \mathbb{P}(X_0 = 0, X_1 = 1, X_2 = 2) &= P_{12} P_{01} \cdot \mu_{0}^0 = 0 \cdot 0.2 \cdot 0.3 = 0 \end{align*} $$and
$$ \begin{align*} \mathbb{P}(X_0 = 0, X_1 = 1, X_2 = 1) &= P_{11} P_{01} \cdot \mu_{0}^0 = 0.1 \cdot 0.2 \cdot 0.3 = 0.006. \end{align*} $$
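As a quick numerical sanity check of both probabilities (a minimal sketch using the matrix and initial distribution given above):
P = np.array([[0.1, 0.2, 0.7], [0.9, 0.1, 0.0], [0.1, 0.8, 0.1]])
mu_0 = np.array([0.3, 0.4, 0.3])
print(mu_0[0] * P[0, 1] * P[1, 2])  # P(X_0 = 0, X_1 = 1, X_2 = 2) = 0.0
print(mu_0[0] * P[0, 1] * P[1, 1])  # P(X_0 = 0, X_1 = 1, X_2 = 1) = 0.006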
Let $Y_1, Y_2, \dots$ be a sequence of iid observations such that $\mathbb{P}(Y = 0) = 0.1$, $\mathbb{P}(Y = 1) = 0.3$, $\mathbb{P}(Y = 2) = 0.2$, $\mathbb{P}(Y = 3) = 0.4$. Let $X_0 = 0$ and let
$$X_n = \max \{Y_1, \dots, Y_n \}.$$Show that $X_0, X_1, \dots$ is a Markov chain and find the transition matrix.
Solution:
A process $\{X_n : n \in T \}$ is a Markov chain if
$$\mathbb{P}(X_n = x \mid X_0, \dots, X_{n-1}) = \mathbb{P}(X_n = x \mid X_{n-1}).$$Observe that $X_k = \max(Y_1, \dots, Y_k) \le \max(Y_1, \dots, Y_{k+1}) = X_{k+1}$, so the chain is non-decreasing. Moreover,
$$ \mathbb{P}(X_n = j \mid X_{n-1} = i) = \begin{cases} 0 & j < i \\ \sum_{x \in \mathcal{X}, x \le j} \mathbb{P}(Y = x) & j = i \\ \mathbb{P}(Y = j) & j > i. \end{cases} $$Conditioning the above expression on earlier values of the sequence $\{X_k\}$ will not change the RHS. Therefore, the sequence is a Markov chain. Furthermore, the transition matrix is given by the above formula. Explicitly:
$$ \mathbf{P} = \begin{bmatrix} 0.1 & 0.3 & 0.2 & 0.4 \\ 0 & 0.4 & 0.2 & 0.4 \\ 0 & 0 & 0.6 & 0.4 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$
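As a sanity check of this matrix, the transition probabilities can be estimated empirically (a quick simulation sketch; the sample sizes are arbitrary):
# Estimate P(X_{n+1} = j | X_n = i) for X_n = max(Y_1, ..., Y_n)
probs = [0.1, 0.3, 0.2, 0.4]
counts = np.zeros((4, 4))
for _ in range(5000):
    Y = np.random.choice(4, size=20, p=probs)
    X = np.concatenate(([0], np.maximum.accumulate(Y)))  # X_0 = 0, X_n = max(Y_1, ..., Y_n)
    for i, j in zip(X[:-1], X[1:]):
        counts[i, j] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P_hat, 2))  # should be close to the matrix above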
Consider a two-state Markov chain with states $\mathcal{X} = \{1, 2\}$ and transition matrix
$$ \mathbf{P} = \begin{bmatrix} 1 - a & a \\ b & 1 - b \\ \end{bmatrix} $$where $0 < a < 1$ and $0 < b < 1$. Prove that
$$ \lim_{n \to \infty} \mathbf{P}^n = \begin{bmatrix} \frac{b}{a + b} & \frac{a}{a + b} \\ \frac{b}{a + b} & \frac{a}{a + b} \\ \end{bmatrix} $$Solution:
Diagonalizing $\mathbf{P}$:
$$ \mathbf{P} = \mathbf{Q} \mathbf{D} \mathbf{Q}^{-1} = \begin{bmatrix} 1 & -a \\ 1 & b \\ \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 - (a + b) \end{bmatrix} \begin{bmatrix} 1 & -a \\ 1 & b \\ \end{bmatrix}^{-1}. $$Since $$\lim_{n \to \infty} \mathbf{D}^n = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$
we have
$$ \begin{align*} \mathbf{P}^n &= \mathbf{Q} \mathbf{D}^n \mathbf{Q}^{-1} \\ &\to \begin{bmatrix} 1 & -a \\ 1 & b \\ \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & -a \\ 1 & b \\ \end{bmatrix}^{-1} \\ &= \begin{bmatrix} \frac{b}{a + b} & \frac{a}{a + b} \\ \frac{b}{a + b} & \frac{a}{a + b} \\ \end{bmatrix} \end{align*} $$Consider the chain from question 3 and set $a = .1$ and $b = .3$. Simulate the chain. Let
$$\hat{p}_n(1) = \frac{1}{n} \sum_{i = 1}^n I(X_i = 1)$$$$\hat{p}_n(2) = \frac{1}{n} \sum_{i = 1}^n I(X_i = 2)$$be the proportion of times the chain is in state 1 and state 2. Plot $\hat{p}_n(1)$ and $\hat{p}_n(2)$ versus $n$ and verify that they converge to the values predicted from the answer in the previous question.
Solution:
By Definition 23.24, the chain has a limiting distribution $\pi = \left[\frac{b}{a+b}, \frac{a}{a+b}\right]$. Furthermore, it is easy to verify that the chain is ergodic. Therefore, by Theorem 23.35 with $g(x) = I(x = i)$, $\hat{p}_n(i) \rightarrow \pi_i$ for both $i = 1$ and $i = 2$. Explicitly, we expect $\hat{p}_n(1) \to 0.3 / (0.1 + 0.3) = 0.75$ and $\hat{p}_n(2) \to 0.25$.
a = 0.1
b = 0.3
P = [[1 - a, a], [b, 1 - b]]
N = 1000
initial_state = 1
current_state = initial_state
n_1 = 0
n_2 = 0
p_hat_1 = []
p_hat_2 = []
for n in range(1, N):
    # Row of P corresponding to the current state (states are labeled 1 and 2)
    transition_probs = P[current_state - 1]
    current_state = np.random.choice([1, 2], p=transition_probs)
    if current_state == 1:
        n_1 += 1
    else:
        n_2 += 1
    # Running proportions of time spent in each state after n steps
    p_hat_1.append(n_1 / n)
    p_hat_2.append(n_2 / n)
fig = plt.figure(figsize=(10, 6))
plt.plot(range(1, N), p_hat_1, label=r"$\hat{p}_n(1)$")
plt.hlines([0.25, 0.75], xmin=0, xmax=N, linestyles="dashed", color="black")
plt.plot(range(1, N), p_hat_2, label=r"$\hat{p}_n(2)$")
plt.grid()
plt.xlabel("$n$")
plt.legend()
plt.show()
An important Markov chain is the branching process which is used in biology, genetics, nuclear physics, and many other fields. Suppose that an animal has $Y$ children. Let $p_k = \mathbb{P}(Y = k)$. Hence $p_k \ge 0$ for all $k$ and $\sum_{k=0}^{\infty} p_k = 1$. Assume each animal has the same lifespan and that they produce offspring according to the distribution $p_k$. Let $X_n$ be the number of animals in the $n^{\text{th}}$ generation. Let $Y_1^{(n)}, \dots, Y_{X_n}^{(n)}$ be the offspring produced in the $n^{\text{th}}$ generation. Note that
$$X_{n+1} = Y_1^{(n)} + \dots + Y_{X_n}^{(n)}.$$Let $\mu = \mathbb{E}(Y)$ and $\sigma^2 = \mathbb{V}(Y)$. Assume throughout this question that $X_0 = 1$. Let $M(n) = \mathbb{E}(X_n)$ and $V(n) = \mathbb{V}(X_n)$.
(a) Show that $M(n + 1) = \mu M(n)$ and $V(n + 1) = \sigma^2 M(n) + \mu^2 V(n)$.
$$ \begin{align*} M(n + 1) &= \mathbb{E}(X_{n+1}) \\ &= \mathbb{E}\left[ \mathbb{E} \left[X_{n+1} \mid X_n \right] \right] \tag{Adam's Law} \\ &= \mathbb{E}\left[ \mathbb{E} \left[Y_1^{(n)} + \dots + Y_{X_n}^{(n)} \mid X_n \right] \right] \\ &= \mathbb{E}\left[ \mu X_n \right] \\ &= \mu M(n) \end{align*} $$$$ \begin{align*} V(n + 1) &= \mathbb{V}(X_{n+1}) \\ &= \mathbb{E}\left[ \mathbb{V} \left[ X_{n+1} \mid X_n\right]\right] + \mathbb{V}\left[\mathbb{E}\left[X_{n+1} \mid X_n \right]\right] \tag{Eve's Law} \\ &= \mathbb{E}\left[ \mathbb{V} \left[ Y_1^{(n)} + \dots + Y_{X_n}^{(n)} \mid X_n \right]\right] + \mathbb{V}\left[\mathbb{E} \left[ Y_1^{(n)} + \dots + Y_{X_n}^{(n)} \mid X_n \right] \right] \\ &= \mathbb{E}\left[ \sigma^2 X_n\right] + \mathbb{V}\left[\mu X_n\right] \\ &= \sigma^2 M(n) + \mu^2 V(n) \end{align*} $$(b) Show that $M(n) = \mu^n$ and that $V(n) = \sigma^2 \mu^{n-1}(1 + \mu + \dots + \mu^{n-1})$.
The results follow by induction and the results in the previous answer. For $M(n)$, observe
$$M(1) = \mathbb{E}[X_1] = \mathbb{E}\left[Y_{1}^{(0)}\right] = \mu$$and
$$M(n) = \mu^n \Rightarrow M(n+1) = \mu M(n) = \mu^{n+1}.$$Meanwhile, for $V(n)$, observe
$$V(1) = \mathbb{V}[X_1] = \mathbb{V}\left[ Y_{1}^{(0)} \right] = \sigma^2$$and
$$ \begin{align*} V(n) = \sigma^2 \mu^{n-1}(1 + \mu + \dots + \mu^{n-1}) \Rightarrow V(n+1) &= \sigma^2 M(n) + \mu^2 V(n) \\ &= \sigma^2 \mu^{n} + \mu^2 \sigma^2 \mu^{n-1}(1 + \mu + \dots + \mu^{n-1}) \\ &= \sigma^2 \mu^{n} + \sigma^2 \mu^{n}(\mu + \dots + \mu^{n}) \\ &= \sigma^2 \mu^{n}\left(1 + \mu + \dots + \mu^{n}\right) \end{align*} $$(c) What happens to the variance if $\mu > 1$? What happens to the variance if $\mu = 1$? What happens to the variance if $\mu < 1$?
$\mu > 1 \Rightarrow V(n) > \sigma^2 \mu^{n-1} \rightarrow \infty$
$\mu = 1 \Rightarrow V(n) = n \sigma^2 \rightarrow \infty$
$\mu < 1 \Rightarrow V(n) < n \sigma^2 \mu^{n - 1} \rightarrow 0$, since the exponential decay of $\mu^{n-1}$ dominates the linear growth of $n$ (e.g., by L'Hôpital's Rule).
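These formulas can be checked by Monte Carlo. A minimal sketch, using the offspring distribution from part (e) below ($p_0 = 1/4$, $p_1 = 1/2$, $p_2 = 1/4$, so $\mu = 1$ and $\sigma^2 = 1/2$, giving $M(n) = 1$ and $V(n) = n/2$):
# Simulate the branching process for n_gen generations, starting from X_0 = 1
p_offspring = [0.25, 0.5, 0.25]
n_gen, n_reps = 10, 20000
X_final = np.zeros(n_reps)
for r in range(n_reps):
    x = 1
    for _ in range(n_gen):
        x = np.random.choice(3, size=x, p=p_offspring).sum() if x > 0 else 0
    X_final[r] = x
print(X_final.mean())  # should be close to M(10) = 1
print(X_final.var())   # should be close to V(10) = 10 * 0.5 = 5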
(d) The population goes extinct if $X_n = 0$ for some $n$. Let us thus define the extinction time $N$ by
$$N = \min \{n : X_n =0 \}.$$Let $F(n) = \mathbb{P}(N \le n)$ be the CDF of the random variable $N$. Show that
$$F(n) = \sum_{k=0}^{\infty} p_k(F(n - 1))^k, \,\, n = 1, 2, \dots$$Hint: Note that the event $\{N \le n\}$ is the same as event $\{X_n = 0\}$. Thus, $\mathbb{P}(\{N \le n\}) = \mathbb{P}(\{X_n = 0\})$. Let $k$ be the number of offspring of the original parent. The population becomes extinct at time $n$ if and only if each of the $k$ sub-populations generated from the $k$ offspring goes extinct in $n - 1$ generations.
Solution:
Expressing explicitly the logic of the hint:
$$ \begin{align*} F(n) &= \mathbb{P}(N \le n) \\ &= \mathbb{P}(X_n = 0) \\ &= \mathbb{E}\left[\mathbb{P}(X_n = 0 \mid X_{1})\right] \\ &= \sum_{k=0}^{\infty} p_k \left( \mathbb{P}(X_n = 0 \mid X_{1} = k) \right) \\ &= \sum_{k=0}^{\infty} p_k \left( \mathbb{P}(X_{n-1} = 0) \right)^k\\ &= \sum_{k=0}^{\infty} p_k \left( F(n - 1)\right)^k. \\ \end{align*} $$(e) Suppose that $p_0 = 1/4$, $p_1 = 1/2$, $p_2 = 1/4$. Use the formula from (5d) to compute the CDF $F(n)$.
Solution:
Plugging into the formula and letting $f_{n} = F(n)$, we obtain the quadratic recurrence:
$$f_n = \frac{1}{4} + \frac{f_{n-1}}{2} + \frac{f_{n-1}^2}{4} = \frac{(f_{n-1} + 1)^2}{4}$$Since $X_0 = 1$, $f_0 = 0$.
f = []
f_n = 0
N = 100
for n in range(N):
    # F(n) = (F(n-1) + 1)^2 / 4, starting from F(0) = f_0 = 0
    f_n = ((f_n + 1) ** 2) / 4
    f.append(f_n)
CDF = np.array(f)
plt.plot(range(1, N + 1), CDF, label="CDF")  # f holds F(1), ..., F(N)
plt.grid()
plt.xlabel("$n$")
plt.ylabel("$F(n)$")
plt.title("Extinction Time $N$")
plt.legend()
plt.show()
Let
$$ \mathbf{P} = \begin{bmatrix} 0.40 & 0.50 & 0.10 \\ 0.05 & 0.70 & 0.25 \\ 0.05 & 0.50 & 0.45 \end{bmatrix} $$Find the stationary distribution $\pi$.
Solution:
By Definition 23.23, $\pi$ is a left eigenvector of $\mathbf{P}$ with eigenvalue 1, i.e., $\pi = \pi \mathbf{P} \Rightarrow \pi \left(\mathbf{P} - \mathbf{I}\right) = 0$. Noting
$$ \mathbf{P} = \frac{1}{20}\begin{bmatrix} 8 & 10 & 2 \\ 1 & 14 & 5 \\ 1 & 10 & 9 \end{bmatrix}, $$the problem reduces to solving a linear system, where we also stipulate $||\pi||_{L_1} = 1$:
$$ \begin{bmatrix} -12 & 1 & 1 \\ 10 & -6 & 10 \\ 2 & 5 & -11 \\ 1 & 1 & 1 \\ \end{bmatrix} \begin{bmatrix} \pi_1 \\ \pi_2 \\ \pi_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ \end{bmatrix}, $$which yields $\pi = [1/13, 5/8, 31/104]$.
P = (1 / 20) * np.array([[8, 10, 2], [1, 14, 5], [1, 10, 9]])
pi = np.array([1 / 13, 5 / 8, 31 / 104])
assert np.allclose(pi @ P, pi)  # check stationarity up to floating-point error
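Alternatively, the (overdetermined) linear system displayed above can be solved numerically, e.g. via least squares (a short sketch):
# Stack (P - I)^T with a row of ones to impose the normalization constraint
A = np.vstack([(P - np.eye(3)).T, np.ones(3)])
b = np.array([0, 0, 0, 1])
pi_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi_lstsq)  # approximately [0.0769, 0.625, 0.2981] = [1/13, 5/8, 31/104]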
Show that if $i$ is a recurrent state and $i \leftrightarrow j$, then $j$ is a recurrent state.
Solution:
Since $i \leftrightarrow j$, there exist $r$ and $s$ such that $p_{ij}(r) > 0$ and $p_{ji}(s) > 0$. Observe that for all $n$, $p_{jj}(n + r + s) \ge p_{ji}(s)p_{ii}(n)p_{ij}(r)$. Then,
$$ \begin{align*} \infty = \sum_{n} p_{ii}(n) \le \frac{1}{p_{ij}(r)p_{ji}(s)} \sum_{n} p_{jj}(n + r + s) \le \frac{1}{p_{ij}(r)p_{ji}(s)} \sum_{n} p_{jj}(n). \end{align*} $$Thus $j$ is recurrent.
Let
$$ \mathbf{P} = \begin{bmatrix} \frac13 & 0 & \frac13 & 0 & 0 & \frac13 \\ \frac12 & \frac14 & \frac14 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ \frac14 & \frac14 & \frac14 & 0 & 0 & \frac14 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$Which states are transient? Which states are recurrent?
Solution:
Note that $C_1 = \{3, 5\}$ and $C_2 = \{6\}$ are irreducible closed sets, so states 3, 5, and 6 are recurrent. Meanwhile, states 1, 2, and 4 are transient: from each of them the chain can follow the path $4 \to 2 \to 1 \to 6$ (or a suffix of it) into the absorbing state 6 with positive probability and never return.
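This is also visible numerically: in high powers of $\mathbf{P}$ the columns corresponding to the transient states 1, 2, and 4 vanish (a quick sketch; states 1-6 index the rows and columns in order):
P = np.array(
    [
        [1 / 3, 0, 1 / 3, 0, 0, 1 / 3],
        [1 / 2, 1 / 4, 1 / 4, 0, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [1 / 4, 1 / 4, 1 / 4, 0, 0, 1 / 4],
        [0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 1],
    ]
)
print(np.round(np.linalg.matrix_power(P, 100), 3))  # columns for states 1, 2, 4 are (numerically) zero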
Let
$$ \mathbf{P} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$Show that $\pi = (1/2, 1/2)$ is a stationary distribution. Does this chain converge? Why/why not?
Solution:
It is a stationary distribution by definition 23.23:
$$ \begin{align*} \pi \mathbf{P} &= \begin{bmatrix} 1/2 & 1/2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \\ &= \begin{bmatrix} 1/2 & 1/2 \end{bmatrix} = \pi \end{align*} $$However, the chain is periodic (with period 2), so it is not ergodic and convergence of $\mathbf{P}^n$ is not guaranteed. Indeed, $\mathbf{P}^2 = \mathbf{I}$, so $\mathbf{P}^n$ alternates between $\mathbf{P}$ and $\mathbf{I}$: the chain deterministically alternates between the two states forever.
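A two-line numerical illustration (sketch):
P = np.array([[0, 1], [1, 0]])
print(np.linalg.matrix_power(P, 2))  # identity, so P^n alternates between P and I and never converges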
Let $0 < p < 1$ and $q = 1 - p$. Let
$$ \mathbf{P} = \begin{bmatrix} q & p & 0 & 0 & 0 \\ q & 0 & p & 0 & 0 \\ q & 0 & 0 & p & 0 \\ q & 0 & 0 & 0 & p \\ 1 & 0 & 0 & 0 & 0 \\ \end{bmatrix} $$Find the limiting distribution of the chain.
Solution:
The chain is irreducible and aperiodic (state 1 has a self-loop since $q > 0$), so its unique stationary distribution is also its limiting distribution. We can find it by identifying the normalized eigenvector of $\mathbf{P}^T$ corresponding to eigenvalue 1, yielding $\pi \propto \begin{bmatrix} p^{-4} & p^{-3} & p^{-2} & p^{-1} & 1 \end{bmatrix}.$
p = 0.2
q = 1 - p
P = np.array(
    [
        [q, p, 0, 0, 0],
        [q, 0, p, 0, 0],
        [q, 0, 0, p, 0],
        [q, 0, 0, 0, p],
        [1, 0, 0, 0, 0],
    ]
)
# Take the eigenvector of P^T whose eigenvalue is (numerically) 1
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
v = v / np.sum(v)
pi = np.power(p, np.arange(-4, 1))
pi = pi / np.sum(pi)
print(f"From numerical calculation: {v}")
print(f"From formula: {pi}")
From numerical calculation: [0.80025608 0.16005122 0.03201024 0.00640205 0.00128041]
From formula: [0.80025608 0.16005122 0.03201024 0.00640205 0.00128041]
Let $X(t)$ be an inhomogeneous Poisson process with intensity function $\lambda(t) > 0$. Let $\Lambda(t) = \int_0^t \lambda(u) du$. Define $Y(s) = X(t)$ where $s = \Lambda(t)$. Show that $Y(s)$ is a homogeneous Poisson process with intensity $\lambda = 1$.
Solution:
Since $\lambda(t) > 0$, $\Lambda(t)$ is strictly monotonic, which implies that $\Lambda^{-1}(t)$ exists. Therefore,
$$ \begin{align*} Y(t) &= Y(\Lambda(\Lambda^{-1}(t))) \\ &= X(\Lambda^{-1}(t)) \tag{$Y(\Lambda(t)) = X(t)$} \\ &\sim \text{Poisson}(m(\Lambda^{-1}(t))) \tag{Theorem 23.33} \\ &= \text{Poisson}(t) \tag{$m(t) = \Lambda(t)$}. \end{align*} $$Moreover, since $\Lambda$ is strictly increasing, disjoint intervals in $s$ correspond to disjoint intervals in $t$, so $Y$ inherits independent increments from $X$. Hence $Y(s)$ is a homogeneous Poisson process with intensity $\lambda = 1$.
Let $X(t)$ be a Poisson process with intensity $\lambda$. Find the conditional distribution of $X(t)$ given that $X(t + s) = n$.
Solution:
Let $X_s = X(s)$. From Bayes' Theorem:
$$ \begin{align*} \mathbb{P}(X_t = k \mid X_{t + s} = n) &= \frac{\mathbb{P}(X_{t + s} = n \mid X_t = k) \mathbb{P}(X_t = k)}{\mathbb{P}(X_{t + s} = n)} \end{align*} $$Noting
$$ \begin{align*} \mathbb{P}(X_t = j) = \frac{e^{-\lambda t}(\lambda t)^j}{j!} \end{align*} $$and that
$$ \begin{align*} \mathbb{P}(X_{t+s} = n \mid X_{t} = k) = \mathbb{P}(X_s = n - k) \end{align*} $$we get
$$ \begin{align*} \mathbb{P}(X_t = k \mid X_{t + s} = n) &= \frac{e^{-\lambda s} (\lambda s)^{n-k} e^{-\lambda t}(\lambda t)^k n!}{e^{-\lambda (t + s)} (\lambda (t + s))^n k! (n-k)!} \\ &= {n \choose k} \left(\frac{s}{s + t} \right)^{n - k} \left(\frac{t}{s + t} \right)^k \end{align*} $$meaning $X_t \mid (X_{t + s} = n) \sim \text{Binom}(n, t / (s + t))$.
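A simulation sketch of this result (the values of $\lambda$, $t$, $s$, and $n$ below are arbitrary illustrative choices):
from math import comb

# Check that X_t | X_{t+s} = n ~ Binomial(n, t / (t + s))
lam, t, s, n = 2.0, 1.0, 3.0, 5
reps = 200000
X_t = np.random.poisson(lam * t, size=reps)
X_t_plus_s = X_t + np.random.poisson(lam * s, size=reps)  # independent increment on (t, t + s]
sample = X_t[X_t_plus_s == n]
empirical = np.bincount(sample, minlength=n + 1) / len(sample)
theoretical = np.array(
    [comb(n, k) * (t / (t + s)) ** k * (s / (t + s)) ** (n - k) for k in range(n + 1)]
)
print(np.round(empirical, 3))
print(np.round(theoretical, 3))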
Let $X(t)$ be a Poisson process with intensity $\lambda$. Find the probability that $X(t)$ is odd, i.e. $\mathbb{P}(X(t) = 1, 3, 5, \dots)$.
Solution:
For any integer $k$,
$$\mathbb{P}(X_t = k) = \frac{e^{-\lambda t}(\lambda t)^k}{k!}.$$Therefore,
$$ \begin{align*} \mathbb{P}(X_t \text{ is odd}) &= \sum_{k=0}^{\infty} \mathbb{P}(X_t = 2k +1) \\ &= \sum_{k=0}^{\infty} \frac{e^{-\lambda t}(\lambda t)^{2k + 1}}{(2k + 1)!} \\ &= e^{-\lambda t} \sum_{k=0}^{\infty} \frac{(\lambda t)^{2k + 1}}{(2k + 1)!} \\ &= e^{-\lambda t} \sinh(\lambda t) \tag{Maclaurin expansion of $\sinh(\lambda t)$} \end{align*} $$
lambdas = np.logspace(-2, 1, 4)
tt = np.linspace(0, 10, 100)
for lam in lambdas:
    plt.plot(tt, np.exp(-lam * tt) * np.sinh(lam * tt), label=rf"$\lambda = {lam}$")
plt.legend()
plt.grid()
plt.xlabel("$t$")
plt.title("P($X_t$ is odd)")
plt.show()
Suppose that people logging in to the University computer system is described by a Poisson process $X(t)$ with intensity $\lambda$. Assume that a person stays logged in for some random time with CDF $G$. Assume these times are all independent. Let $Y(t)$ be the number of people on the system at time $t$. Find the distribution of $Y(t)$.
Solution:
Note that given $X_t = n$, the arrival times of the $n$ people who have logged on by time $t$ are i.i.d. $\text{Uniform}(0, t)$. Therefore, letting
$$p = \mathbb{P}(\text{a person, having logged on by $t$, is still logged on at $t$}) = \frac{1}{t} \int_0^t \left[1 - G(t - u)\right] \, du,$$we have $Y_t \mid X_t = n \sim \text{Binomial}(n, p)$. Since $X_t \sim \text{Poisson}(\lambda t)$ and a Binomial thinning of a Poisson count is again Poisson, the marginal distribution of $Y_t$ is $Y_t \sim \text{Poisson}(\lambda t p)$.
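A simulation sketch of this thinning argument, taking $G$ to be the Exponential(1) CDF (so $p = (1 - e^{-t})/t$); the values of $\lambda$ and $t$ are arbitrary:
# Compare the simulated number of active sessions at time t with Poisson(lambda * t * p)
lam, t = 3.0, 2.0
reps = 50000
Y_t = np.zeros(reps, dtype=int)
for r in range(reps):
    n = np.random.poisson(lam * t)                  # logins by time t
    arrivals = np.random.uniform(0, t, size=n)      # given n, arrival times are Uniform(0, t)
    durations = np.random.exponential(1.0, size=n)  # session lengths with CDF G
    Y_t[r] = np.sum(arrivals + durations > t)       # still logged in at time t
p = (1 - np.exp(-t)) / t
print(Y_t.mean(), lam * t * p)  # empirical mean vs. Poisson mean
print(Y_t.var(), lam * t * p)   # for a Poisson distribution the variance equals the mean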
Let $X(t)$ be a Poisson process with intensity $\lambda$. Let $W_1, W_2, \dots$ be the waiting times. Let $f$ be an arbitrary function. Show that
$$\mathbb{E} \left( \sum_{i = 1} ^ {X(t)} f(W_i) \right) = \lambda \int_0^t f(w) dw.$$Solution:
Conditioned on $X_t = n$, $\sum_{i=1}^n f(W_i)$ is identically distributed as $\sum_{i=1}^n f(U_i)$, where $U_i \sim \text{Uniform}(0, t)$. Thus,
$$\mathbb{E}\left[ \sum_{i = 1} ^ {n} f(W_i) \mid X_t = n\right] = \frac{n}{t} \int_0^t f(w) \, dw.$$Therefore,
$$ \begin{align*} \mathbb{E} \left( \sum_{i = 1} ^ {X(t)} f(W_i) \right) & = \sum_{n=0}^{\infty} \mathbb{E} \left[ \sum_{i = 1} ^ {n} f(W_i) \mid X_t = n\right]\mathbb{P}(X_t = n) \\ &= \sum_{n = 0}^{\infty} \frac{n}{t} \left( \int_0^t f(w) \, dw \right) \mathbb{P}(X_t = n) \\ &= \left( \frac{1}{t} \int_0^t f(w) \, dw \right) \sum_{n = 0}^{\infty} n \cdot \mathbb{P}(X_t = n) \\ &= \left( \frac{1}{t} \int_0^t f(w) \, dw \right) \mathbb{E}(X_t) \\ &= \lambda \int_0^t f(w) \, dw. \end{align*} $$A two-dimensional Poisson point process is a process of random points on the plane such that (i) for any set $A$, the number of points falling in $A$ is Poisson with mean $\lambda \mu(A)$ where $\mu(A)$ is the area of $A$, and (ii) the numbers of events in non-overlapping regions are independent. Consider an arbitrary point $x_0$ in the plane. Let $X$ denote the distance from $x_0$ to the nearest random point. Show that
$$\mathbb{P}(X > t) = e ^ {-\lambda \pi t ^ 2}$$and
$$\mathbb{E}(X) = \frac{1}{2 \sqrt{\lambda}}.$$Solution:
For all $t > 0$, let $A_t = \{x : |x - x_0| \le t\}$. Note that $\mu(A_t) = \pi t^2$. Then,
$$ \begin{align*} \mathbb{P}(X > t) &= \mathbb{P}(\text{zero points fall in } A_t) \\ &= \frac{e^{- \lambda \mu(A_t)} (\lambda \mu(A_t))^0}{0!} \\ &= e^{- \lambda \pi t^2}. \end{align*} $$Therefore, the CDF of $X$ is
$$F_X(t) = 1 - e ^ {-\lambda \pi t ^ 2}$$and the PDF is
$$f_X(t) = 2 \lambda \pi t e ^ {-\lambda \pi t ^ 2}.$$Thus, the expected value can be computed by integrating by parts and recalling the integral of an even Gaussian function over the positive reals: $\int_0^{\infty} e ^{-ax^2} \, dx = \frac12 \sqrt{\pi / a}$:
$$ \begin{align*} \mathbb{E}(X) &= \int_0^{\infty} t \cdot f_X(t) \, dt \\ &= \int_0^{\infty} 2 \lambda \pi t ^ 2 e ^ {-\lambda \pi t ^ 2} \, dt \\ &= -t e ^ {-\lambda \pi t ^ 2} \rvert_0^{\infty} + \int_0^{\infty} e ^{-\lambda \pi t^2} \, dt \tag{integrating by parts, where $u = -t$ and $v = e ^{-\lambda \pi t ^2}$}\\ &= \frac{1}{2 \sqrt{\lambda}} \tag{left term vanishes, Gaussian integral}. \end{align*} $$
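Both results can be checked by Monte Carlo (a sketch; $\lambda$ is arbitrary and the square window is chosen large enough that the nearest point to $x_0 = (0, 0)$ essentially never lies outside it):
# Nearest-point distance in a 2D Poisson process
lam = 4.0
half_width = 2.0
reps = 20000
dists = np.zeros(reps)
for r in range(reps):
    n = np.random.poisson(lam * (2 * half_width) ** 2)             # number of points in the window
    pts = np.random.uniform(-half_width, half_width, size=(n, 2))  # given n, points are uniform
    dists[r] = np.sqrt((pts ** 2).sum(axis=1)).min()
t = 0.3
print((dists > t).mean(), np.exp(-lam * np.pi * t ** 2))  # P(X > t)
print(dists.mean(), 1 / (2 * np.sqrt(lam)))               # E(X)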