Python Library for Random Walks
Overview
Pyrandwalk is a tool for simulating random walks, calculating the probability of given state sequences, and more. A random walk is a representation of a discrete-time, discrete-value Markov chain, a model used in stochastic processes.
Installation
Source code
- Download Version 0.9 or Latest Source
- Run pip install -r requirements.txt or pip3 install -r requirements.txt (may need root access)
- Run python3 setup.py install or python setup.py install (may need root access)
Usage
>>> from pyrandwalk import *
>>> import numpy as np
>>> states = [0, 1, 2, 3, 4]
>>> trans = np.array([[1, 0, 0, 0, 0],
... [0.25, 0, 0.75, 0, 0],
... [0, 0.25, 0, 0.75, 0],
... [0, 0, 0.25, 0, 0.75],
... [0, 0, 0, 1, 0]])
>>> rw = RandomWalk(states, trans)
We will simulate random walks on the chain defined above (the matrix entries are transition probabilities):
Probability of A Sequence
Suppose you start from state 2, move to state 1, and then get stuck in state 0. What is the probability of this walk sequence?
>>> rw.prob_sec([2, 1, 0])
0.0125
The initial probability distribution is assumed to be uniform by default, but you can change it by passing the optional argument initial_dist:
>>> rw.prob_sec([2, 1, 0], initial_dist=[0, 0, 1, 0, 0])
0.0625
Run a random walk
You can start a random walk on the given Markov chain and see the result:
>>> states, probs = rw.run()
>>> states
[4, 3, 4, 3, 4, 3, 4, 3, 2, 3, 4]
>>> probs
[0.2, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75, 0.75]
By default your random walk will contain 10 steps, but you can change that by passing the optional argument ntimes:
>>> states, probs = rw.run(ntimes=20)
>>> states
[3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]
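Each step of such a simulation simply samples the next state from the current state's row of the transition matrix. A sketch of that mechanism in plain NumPy (pyrandwalk's internals may differ; the seed and variable names here are arbitrary):

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

rng = np.random.default_rng(0)
state = int(rng.choice(5))                    # uniform initial state
path, probs = [state], []
for _ in range(10):                           # 10 steps, like the default ntimes
    nxt = int(rng.choice(5, p=trans[state]))  # sample next state from current row
    probs.append(float(trans[state, nxt]))
    path.append(nxt)
    state = nxt
print(path)
print(probs)
```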
And if you want to see what is going on during the simulation, you can set the show flag:
>>> states, probs = rw.run(ntimes=30, show=True)
1 --> 2 (p = 0.750)
2 --> 3 (p = 0.750)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 2 (p = 0.250)
2 --> 1 (p = 0.250)
1 --> 2 (p = 0.750)
2 --> 3 (p = 0.750)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 4 (p = 0.750)
4 --> 3 (p = 1.000)
3 --> 2 (p = 0.250)
2 --> 3 (p = 0.750)
>>> states
[1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]
Final Probability Distribution
You can easily find the final probability distribution of your random walk:
>>> rw.final_dist()
array([1., 0., 0., 0., 0.])
which implies that the walk will end up in state 0 with probability 1 as time goes on.
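You can sanity-check this limit yourself: state 0 is absorbing and all other states are transient, so raising the transition matrix to a large power makes every row converge to the limiting distribution (a rough numerical check, not necessarily how the library computes it):

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

limit = np.linalg.matrix_power(trans, 2000)
print(limit.round(3))   # every row is ~[1, 0, 0, 0, 0]
```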
Is it irreducible?
You can check whether your Markov chain is irreducible (i.e., whether every state can be reached from every other state):
>>> rw.is_irreducible()
False
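The chain above is not irreducible because state 0 is absorbing: once there, no other state is reachable. One standard way to test this is a reachability check on the adjacency structure (a sketch, not necessarily pyrandwalk's method):

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

n = len(trans)
adj = (trans > 0).astype(int)   # 1 where a direct transition exists
# (I + A)^(n-1) is positive at (i, j) iff j is reachable from i
reach = np.linalg.matrix_power(np.eye(n, dtype=int) + adj, n - 1) > 0
print(bool(reach.all()))        # False: nothing else is reachable from state 0
```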
nth transition matrix
If you want to know the probability of moving from state i to state j in n steps, you can easily calculate the nth transition matrix:
>>> rw.trans_power(2)
array([[1. , 0. , 0. , 0. , 0. ],
[0.25 , 0.1875, 0. , 0.5625, 0. ],
[0.0625, 0. , 0.375 , 0. , 0.5625],
[0. , 0.0625, 0. , 0.9375, 0. ],
[0. , 0. , 0.25 , 0. , 0.75 ]])
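The nth transition matrix is simply the nth matrix power of the one-step matrix, so you can cross-check the result with NumPy:

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

p2 = np.linalg.matrix_power(trans, 2)   # same as trans @ trans
print(p2[1, 3])  # P(1 -> 3 in two steps) = 0.75 * 0.75 = 0.5625
```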
Graph edges
You can get the edges of the final graph as a list of (from, to, probability) tuples, one per edge:
>>> rw.get_edges()
[(0, 0, 1.0), (1, 0, 0.25), (1, 2, 0.75), (2, 1, 0.25), (2, 3, 0.75), (3, 2, 0.25), (3, 4, 0.75), (4, 3, 1.0)]
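These edges correspond exactly to the nonzero entries of the transition matrix, which you can verify directly:

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

# one (from, to, probability) tuple per nonzero matrix entry
edges = [(i, j, float(trans[i, j]))
         for i in range(len(trans)) for j in range(len(trans))
         if trans[i, j] > 0]
print(edges)
```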
Graph
Making a networkx graph object from your random walk process is also taken care of by this library:
>>> rw_graph = rw.get_graph()
Colors of Nodes [will be removed]
Until networkx supports drawing graphs with self-loops, this library uses the color blue for ordinary states and red for states with a self-loop.
>>> rw.get_colormap()
['red', 'blue', 'blue', 'blue', 'blue']
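The rule is straightforward: a state is colored red if its diagonal entry is nonzero (a self-loop) and blue otherwise. Assuming that rule, the colormap above can be reconstructed in one line (here only the absorbing state 0 has a self-loop):

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

# red = state has a self-loop (nonzero diagonal entry), blue = no self-loop
colors = ['red' if trans[i, i] > 0 else 'blue' for i in range(len(trans))]
print(colors)  # ['red', 'blue', 'blue', 'blue', 'blue']
```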
Type of Classes
To find out which classes are recurrent and which are transient, you can use the method below; it also gives you the reduced transition matrix for each set.
>>> rw_class_types = rw.get_typeof_classes()
>>> rw_class_types['recurrent']
([0], array([[1.]]))
>>> rw_class_types['transient'][0]
[1, 2, 3, 4]
>>> rw_class_types['transient'][1]
array([[0. , 0.75, 0. , 0. ],
[0.25, 0. , 0.75, 0. ],
[0. , 0.25, 0. , 0.75],
[0. , 0. , 1. , 0. ]])
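The reduced matrix for the transient class is the original matrix restricted to the rows and columns of the transient states (without renormalization), which you can reproduce with NumPy fancy indexing:

```python
import numpy as np

trans = np.array([[1, 0, 0, 0, 0],
                  [0.25, 0, 0.75, 0, 0],
                  [0, 0.25, 0, 0.75, 0],
                  [0, 0, 0.25, 0, 0.75],
                  [0, 0, 0, 1, 0]])

transient = [1, 2, 3, 4]
# restrict rows and columns to the transient states
sub = trans[np.ix_(transient, transient)]
print(sub)
```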
References
1. Gregory F. Lawler, "Introduction to Stochastic Processes".
2. [Markusfeng](https://markusfeng.com/projects/graph/), "Graph / Finite State Machine Designer".