Techniques exist for determining the long run behaviour of Markov chains. Transition graph analysis can reveal the recurrent classes; matrix calculations can determine the stationary distributions for those classes; and various theorems involving periodicity will reveal whether those stationary distributions are relevant to the Markov chain's long run behaviour.
However, for Markov chains of modest size, simply computing the probability distribution vectors for, say, the next 100 time steps will usually reveal the system's long run behaviour with very little effort. This tool performs those calculations.
Given an initial probability distribution (row) vector v(0) and a transition matrix A, this tool calculates the future probability distribution vectors for time t (t = 1,2,3,...) using the relationship
v(t) = v(t-1) A
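If you want to reproduce that iteration outside the tool, here is a minimal sketch in Python. The use of NumPy, the 3-state matrix A and the initial vector v are illustrative assumptions, not part of the tool itself.

    import numpy as np

    # Hypothetical 3-state transition matrix; each row sums to 1.
    A = np.array([
        [0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.2, 0.2, 0.6],
    ])

    # Hypothetical initial probability distribution (row) vector v(0).
    v = np.array([1.0, 0.0, 0.0])

    # Iterate v(t) = v(t-1) A for the next 100 time steps, printing a few of them.
    for t in range(1, 101):
        v = v @ A          # row vector times matrix
        if t in (1, 2, 10, 100):
            print(f"t = {t}: {np.round(v, 4)}")

For most well-behaved chains the printed vectors stop changing noticeably well before t = 100, which is exactly the long run behaviour being looked for.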
In this form, the ijth element of the matrix A is the conditional probability
Aij = P(System will be in state j at time t | It is in state i at time t-1)
Hence, within each row of A, the elements sum to 1. This is the formulation of Markov chains favoured by most statisticians.
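As a small, made-up illustration of that row convention, the snippet below builds a 2-state matrix in which A[i, j] is the probability of moving from state i to state j in one step, and checks that each row sums to 1 (again assuming NumPy):

    import numpy as np

    # Hypothetical 2-state chain.
    A = np.array([
        [0.9, 0.1],   # from state 0: P(stay in 0) = 0.9, P(move to 1) = 0.1
        [0.5, 0.5],   # from state 1: P(move to 0) = 0.5, P(stay in 1) = 0.5
    ])

    # In this row-stochastic formulation, every row must sum to 1.
    assert np.allclose(A.sum(axis=1), 1.0)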
Some textbooks "reverse" the formulation, using a transition matrix B which is the transpose of the matrix given above. Within each column of B, the elements sum to 1. The probability distribution vectors then become column vectors given by the relationship
v(t) = B v(t-1)
If you’re used to that presentation you’ll need to reverse your thinking to use this tool.
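If it helps to check the correspondence between the two formulations, this sketch (with the same assumed NumPy usage and made-up numbers as above) shows that taking B as the transpose of A and writing the distribution as a column vector gives exactly the same probabilities:

    import numpy as np

    A = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])    # row-stochastic, as this tool uses
    B = A.T                            # column-stochastic "reversed" formulation

    v_row = np.array([1.0, 0.0, 0.0])  # v(0) as a row vector
    v_col = v_row.reshape(-1, 1)       # the same distribution as a column vector

    v_row_next = v_row @ A             # v(t) = v(t-1) A
    v_col_next = B @ v_col             # v(t) = B v(t-1)

    # Both conventions give the same probability distribution.
    assert np.allclose(v_row_next, v_col_next.ravel())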
When entering data into the transition matrix:
The second point is useful when transition probabilities are fractions. For example, if the transition probabilities for a row are 2/7, 3/7, 2/7, you can instead enter 2, 3, 2. The program will scale your entries so that the elements in each row do sum to 1.
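As a sketch of what that scaling amounts to (the numbers and the NumPy usage are illustrative, not a description of the tool's internals), each entered row is divided by its own sum:

    import numpy as np

    # Entries typed in as whole numbers instead of fractions.
    raw = np.array([
        [2.0, 3.0, 2.0],   # intended as 2/7, 3/7, 2/7
        [1.0, 1.0, 2.0],   # intended as 1/4, 1/4, 2/4
        [5.0, 0.0, 5.0],   # intended as 1/2, 0,   1/2
    ])

    # Scale each row by its own sum so every row becomes a probability distribution.
    A = raw / raw.sum(axis=1, keepdims=True)

    assert np.allclose(A.sum(axis=1), 1.0)
    print(A[0])   # approximately [0.2857 0.4286 0.2857], i.e. 2/7, 3/7, 2/7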