A simulation of transitions between three states according to the probabilities in the matrix PROB_MATRIX (for example, the probability of moving from state 0 to state 2 is PROB_MATRIX[0][2] = 0.2).
MarkovChainSimulation1.py
import numpy as np

INIT_STATE = 0

# Transition probability matrix: PROB_MATRIX[i][j] is the probability
# of moving from state i to state j.
PROB_MATRIX = [[0.7, 0.1, 0.2],
               [0.2, 0.1, 0.7],
               [0.7, 0.2, 0.1]]


def get_next_state(prob_array):
    """Sample the next state from the given row of transition probabilities."""
    normalization_factor = sum(prob_array)
    rand_num = np.random.rand()  # uniform random number in [0, 1)
    s = 0
    for i in range(len(prob_array)):
        # Accumulate the (normalized) probabilities and return the first
        # state whose cumulative probability exceeds the random number.
        s += prob_array[i] / normalization_factor
        if rand_num < s:
            return i
    return -1


if __name__ == "__main__":
    print("Initial State: " + str(INIT_STATE))
    cur_state = INIT_STATE
    for i in range(30):
        # Use the row of PROB_MATRIX for the current state to sample the next one.
        cur_state = get_next_state(PROB_MATRIX[cur_state])
        print("State" + str(i+1) + " : " + str(cur_state))
> python MarkovChainSimulation1.py
Initial State: 0
State1 : 1
State2 : 2
State3 : 0
State4 : 0
State5 : 0
State6 : 0
State7 : 2
State8 : 0
State9 : 0
State10 : 0
State11 : 1
State12 : 2
State13 : 0
State14 : 0
State15 : 2
State16 : 0
State17 : 0
State18 : 0
State19 : 0
State20 : 0
State21 : 0
State22 : 2
State23 : 2
State24 : 0
State25 : 0
State26 : 2
State27 : 0
State28 : 0
State29 : 0
State30 : 2
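A run of only 30 steps can deviate noticeably from the chain's long-run behavior. The following sketch (my own addition for illustration; the helper stationary_distribution and the step count N_STEPS are not part of the original script) runs the same chain for many more steps with np.random.choice and compares the empirical state frequencies against the stationary distribution computed from the left eigenvector of PROB_MATRIX.

import numpy as np

PROB_MATRIX = np.array([[0.7, 0.1, 0.2],
                        [0.2, 0.1, 0.7],
                        [0.7, 0.2, 0.1]])


def stationary_distribution(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1, normalized."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    # Pick the eigenvector whose eigenvalue is (numerically) closest to 1.
    idx = np.argmin(np.abs(eigvals - 1.0))
    v = np.real(eigvecs[:, idx])
    return v / v.sum()


N_STEPS = 100_000  # illustrative choice; long enough for frequencies to settle
counts = np.zeros(3)
cur_state = 0
for _ in range(N_STEPS):
    # Sample the next state directly from the current row of the matrix.
    cur_state = np.random.choice(3, p=PROB_MATRIX[cur_state])
    counts[cur_state] += 1

print("Empirical frequencies :", counts / N_STEPS)
print("Stationary distribution:", stationary_distribution(PROB_MATRIX))

With the matrix above, both lines should agree to roughly two decimal places, which is a quick sanity check that get_next_state (or any sampling routine) is drawing from the intended transition probabilities.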