It is well known that a random walk on the integer grid returns to its starting point (with probability 1) in one or two dimensions, but may never return in three dimensions. I simulate this with Python and check how the probability of returning converges (or fails to converge).
Chapter 4 of [Foundations of Data Science] 1 has an interesting discussion running from Markov chains through MCMC to random walks.
Consider a random walk on the d-dimensional integer grid: starting from the origin, at each step one of the 2d adjacent points is chosen uniformly at random and the walker moves there, forever. The question is whether the walker eventually returns to the origin, or wanders off and never comes back.
To estimate this by simulation, I run N random walks of n steps each and compute the empirical probability that a walk returns to the origin at least once.
I visualized a 3D random walk (100,000 steps) with reference to [here] 2. In this example the walk starts near the origin (upper left), drifts to the lower right, and does not appear to come back. Repeating such trials many times gives the probability that a trial returns to the origin.
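The plotting code for that trajectory figure is not shown in the post; the following is a minimal sketch of one way to draw a single 3D trajectory (assuming matplotlib's 3D projection; the styling of the linked example is not reproduced):

```python
import numpy as np
import matplotlib.pyplot as plt

# One 3D random walk of 100000 steps: each step picks an axis and a direction.
n, dim = 100000, 3
rng = np.random.default_rng(0)
steps = np.zeros((n, dim), dtype=int)
steps[np.arange(n), rng.integers(dim, size=n)] = rng.choice([-1, 1], size=n)
path = steps.cumsum(axis=0)

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection='3d')
ax.plot(path[:, 0], path[:, 1], path[:, 2], linewidth=0.3)
ax.scatter([0], [0], [0], color='red', label='origin')
ax.legend()
plt.show()
```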
The simulation code below uses a few numpy array tricks and sets up Japanese font display, but I omit the detailed explanation.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns

sns.set()
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.sans-serif'] = ['Hiragino Maru Gothic Pro', 'Yu Gothic',
                                          'Meiryo', 'Takao', 'IPAexGothic',
                                          'IPAPGothic', 'VL PGothic', 'Noto Sans CJK JP']
```
```python
def try_mc(n, N, dim):
    """Perform N random walks of n steps each, starting from the origin
    of the dim-dimensional integer grid.

    Returns an array of shape (N, n, dim) with the position after each step.
    """
    # For every step, pick an axis to move along ...
    index = np.random.randint(dim, size=(N, n))
    # ... and a direction (negative or positive) on that axis.
    sign = np.random.choice([-1, 1], size=(N, n))
    # Scatter the per-step displacements into (N, n, dim) with fancy indexing,
    # then accumulate along the step axis to get positions.
    ps = np.zeros((N, n, dim), dtype=int)
    ps[np.arange(N).reshape(N, 1), np.arange(n), index] += sign
    ps = ps.cumsum(axis=1)
    return ps
```
```python
n = 100000
N = 1000
dims = [1, 2, 3]

fig, axes = plt.subplots(1, 3, sharey=True, sharex=True, figsize=(15, 5))
for i, dim in enumerate(dims):
    ps = try_mc(n, N, dim)
    # Manhattan distance from the origin after each step; 0 means "at the origin".
    ds = np.abs(ps).sum(axis=2)
    ax = axes[i]
    ax.set_xlabel("Random walk length")
    if i == 0:
        ax.set_ylabel("Probability of returning to the origin at least once")
    # Fraction of the N walks that have visited the origin within the first k steps.
    prob = ((ds == 0).cumsum(axis=1) > 0).sum(axis=0) / N
    ax.plot(prob)
```
Since a walk that continues forever ought to come back someday, I would like the probability to converge to 1 ...
In 1D and 2D the return probability is 1, so a walk continued long enough eventually returns to the origin. In 3D, however, the probability of never returning is strictly greater than 0: even if a walk returns once or twice along the way, each return restarts the walk with the same positive escape probability, so eventually some excursion never comes back.
(Strictly speaking, rather than "not coming back", the walk heads off to infinity before returning to the origin, so I'm not sure how best to phrase it.)
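One way to make that precise (a standard argument, my addition): if the probability that the walk ever returns to the origin is $p < 1$, then each return restarts the walk afresh, so

$$\Pr(\text{at least } k \text{ returns}) = p^k \longrightarrow 0 \quad (k \to \infty),$$

i.e. the walk returns only finitely many times almost surely, and after the last return it never comes back.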
Mathematically, the escape probability is defined as

$$p_{escape} = \Pr(\text{a walk starting from } i \text{ reaches } j \text{ before returning to } i).$$

In this lattice graph, take $i$ = the origin and $j$ = the set of points at Manhattan distance $n$ from the origin, and consider $p_{escape}$ as $n \to \infty$; that is the picture behind the simulation above. When $dim = 1, 2$, $p_{escape} = 0$, and when $dim = 3$, $p_{escape} > 0$.
It is relatively easy to prove this. The proof views the graph as an electrical network with a 1Ω resistor on each edge, in which the escape probability can be expressed as

$$p_{escape} = \frac{c_{eff}}{c_i}$$

(where $c_{eff}$ is the effective conductance between $i$ and $j$, and $c_i$ is the sum of the conductances of the edges incident to $i$). The technique is interesting, so if you are curious, see [References] 1. Beyond escape probability, the book also discusses the relationship between voltage, current, and random walks, and between commute time and effective resistance, which is interesting. (Incidentally, "find the escape probability by computer experiment" appears among the exercises.)
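As a rough sketch of that exercise (my own take, not the book's code; the function name `estimate_escape` and the parameter choices are mine), one can estimate $p_{escape}$ by running walks from the origin and recording whether each reaches Manhattan distance n before returning to the origin:

```python
import numpy as np

def estimate_escape(dim, n, trials, rng=None):
    """Estimate p_escape: the probability that a walk starting at the origin
    reaches Manhattan distance n before returning to the origin."""
    if rng is None:
        rng = np.random.default_rng(0)
    escapes = 0
    for _ in range(trials):
        pos = np.zeros(dim, dtype=int)
        while True:
            pos[rng.integers(dim)] += rng.choice((-1, 1))
            d = np.abs(pos).sum()
            if d >= n:        # hit the distance-n boundary first: escaped
                escapes += 1
                break
            if d == 0:        # came back to the origin first
                break
    return escapes / trials

for dim in (1, 2, 3):
    print(dim, estimate_escape(dim, n=20, trials=2000))
```

As n grows, the estimate should approach 0 for $dim = 1, 2$ and stay bounded away from 0 for $dim = 3$, at the cost of longer walks per trial.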