I wrote sample code for Gibbs sampling even though an article covering almost the same content (with a simple explanation!) already exists: [[MH method] I wrote MCMC in Python [Gibbs-Sampler]](http://www.fisproject.jp/2015/12/mcmc-in-python/). (So this post is full of reinventing the wheel (tears).)
Why is the Python code published right at the beginning? Since MCMC is, in any case, a complicated topic, I thought it would help beginners to have "code that works for the time being."
That said, rather than "I wrote it," the code turned out surprisingly short, which can leave the frustration of "it runs, but what is it actually doing?!" I hope this article adds even a little on top of that.
I also tried to organize [an earlier post on MCMC](http://qiita.com/shogiai/items/bab2b915df2b8dd6f6f2) by a predecessor.
The target is the following two-dimensional normal distribution:

$$
p(x, y) = \frac{1}{Z} \exp\left( -\frac{x^2 - 2bxy + y^2}{2} \right)
$$

$Z$ is a normalization constant. The code is below.
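The conditional distributions the sampler uses can be read off by completing the square in the exponent (a short derivation added here for reference). Fixing $y$,

$$
p(x \mid y) \propto \exp\left( -\frac{x^2 - 2bxy}{2} \right) \propto \exp\left( -\frac{(x - by)^2}{2} \right),
$$

so $x \mid y \sim N(by, 1)$, and by symmetry $y \mid x \sim N(bx, 1)$. These are exactly the two normal draws alternated in the loop below.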
```python
## Preparation
import seaborn as sns
import numpy as np
from scipy.stats import norm

## Constant that determines the correlation of the two-dimensional normal distribution
b = 0.8

## Sample size
N = 10000

## Lists to hold the samples
x = []
y = []

## The initial point
x_init = 3.0
y_init = 9.0

## Generate random numbers that follow the two-dimensional normal distribution.
## The else branch of the for loop is where the sampler takes its
## characteristic axis-aligned "zigzag" steps.
for i in range(N):
    if i == 0:
        x.append(x_init)  ## Initial point (x coordinate)
        y.append(y_init)  ## Initial point (y coordinate)
    else:
        ## Fix the y coordinate and draw x from N(b * y, 1)
        x.append(norm(loc=y[i-1]*b, scale=1.0).rvs(size=1)[0])
        ## Fix the x coordinate and draw y from N(b * x, 1)
        y.append(norm(loc=x[i]*b, scale=1.0).rvs(size=1)[0])

## This magic is only needed when drawing inside a Jupyter Notebook
%matplotlib inline
sns.jointplot(x=np.array(x), y=np.array(y))
```
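As a quick sanity check (a small addition, not in the original code): for this target distribution the correlation between $x$ and $y$ works out to exactly $b$, so the sample correlation should land near 0.8.

```python
## Sanity check (added sketch): the theoretical correlation of x and y is b = 0.8,
## so the sample estimate should be close to that.
print(np.corrcoef(x, y)[0, 1])
```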
Even if you change the initial value, the chain eventually converges to the same stationary distribution, so if you only want samples from the "stationary distribution," it is worth discarding the samples from around the initial point (so-called burn-in), as sketched below. This time, though, the motto was "above all, simple code!"
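For example, a minimal burn-in sketch (the cutoff of 1000 draws is an arbitrary illustrative choice, not from the original):

```python
## Discard the early part of the chain as burn-in so that only draws close to
## the stationary distribution remain (the cutoff 1000 is an arbitrary choice).
burn_in = 1000
sns.jointplot(x=np.array(x[burn_in:]), y=np.array(y[burn_in:]))
```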
Reference

- MCMC Lecture (Yukito Iba), difficulty ★★: a video that explains MCMC starting from the very basics.
- [Statistics] Explaining sampling by the Markov chain Monte Carlo method (MCMC) with animation: carefully animates sampling from the same distribution.