
Python Library for Random Walks

Project description




Overview

Pyrandwalk is an educational tool for simulating random walks, calculating the probability of given state sequences, and more. A random walk is a discrete-time, discrete-value Markov chain, a basic model in stochastic processes.


Installation

Source code

  • Download Version 1.1 or Latest Source
  • Run pip install -r requirements.txt or pip3 install -r requirements.txt (may need root access)
  • Run python3 setup.py install or python setup.py install (may need root access)

PyPI

  • Run pip install pyrandwalk==1.1 or pip3 install pyrandwalk==1.1

Usage

>>> from pyrandwalk import *
>>> import numpy as np
>>> states = [0, 1, 2, 3, 4]
>>> trans = np.array([[1,    0, 0,    0, 0],
...                   [0.25, 0, 0.75, 0, 0],
...                   [0, 0.25, 0, 0.75, 0],
...                   [0, 0, 0.25, 0, 0.75],
...                   [0, 0,    0, 1,    0]])
>>> rw = RandomWalk(states, trans)

We will simulate random walks on the chain defined by this transition matrix (each entry is the probability of moving from the row state to the column state):
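Before walking, it's worth checking that the matrix is row-stochastic: every row must sum to 1 and all entries must be non-negative. A quick sanity check in plain NumPy (not part of the pyrandwalk API):

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])

# A valid transition matrix is row-stochastic:
# non-negative entries, each row summing to 1.
assert (trans >= 0).all()
assert np.allclose(trans.sum(axis=1), 1.0)
```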

Probability of A Sequence

Imagine you want to calculate the probability of starting from state 2, going to state 1, and getting stuck in state 0. What's the probability of this walk sequence?

>>> rw.prob_sec([2, 1, 0])
0.0125

The initial probability distribution is uniform by default, but you can change it by passing the optional argument initial_dist:

>>> rw.prob_sec([2, 1, 0], initial_dist=[0, 0, 1, 0, 0])
0.0625
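Under the hood this is the chain rule for Markov chains: the initial probability of the first state, times the product of the one-step transition probabilities along the sequence. A minimal sketch (prob_seq here is a hypothetical helper, not the library function):

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])

def prob_seq(seq, trans, initial_dist):
    """P(X0 = s0) * product of P(s_k -> s_{k+1}) along the sequence."""
    p = initial_dist[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= trans[a, b]
    return p

# Uniform initial distribution: 0.2 * 0.25 * 0.25 = 0.0125
assert np.isclose(prob_seq([2, 1, 0], trans, [0.2] * 5), 0.0125)
# Starting in state 2 for sure: 1 * 0.25 * 0.25 = 0.0625
assert np.isclose(prob_seq([2, 1, 0], trans, [0, 0, 1, 0, 0]), 0.0625)
```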

Run a random walk

You can run a random walk on the given Markov chain and see the result:

>>> states, probs = rw.run()
>>> states
[4, 3, 4, 3, 4, 3, 4, 3, 2, 3, 4]
>>> probs
[0.2, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75, 0.75]

By default your random walk will contain 10 steps, but you can change that by passing the optional argument ntimes:

>>> states, probs = rw.run(ntimes=20)
>>> states
[3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]
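Each step of a run simply samples the next state from the transition-matrix row of the current state. A minimal sketch of that sampling loop with numpy.random (a plain re-implementation sketch, not the library's internals; the trajectory is random, so only structural properties are checked):

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])
states = [0, 1, 2, 3, 4]
rng = np.random.default_rng()

def walk(trans, states, ntimes=10):
    current = rng.choice(len(states))            # uniform initial state
    path = [states[current]]
    for _ in range(ntimes):
        # sample the next state from the current row of the matrix
        current = rng.choice(len(states), p=trans[current])
        path.append(states[current])
    return path

path = walk(trans, states)
assert len(path) == 11                           # initial state plus 10 steps
assert all(trans[a, b] > 0 for a, b in zip(path, path[1:]))
```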

If you want to see what's going on during the simulation, set the show flag:

>>> states, probs = rw.run(ntimes=30, show=True)
1 --> 2  (p = 0.750)
2 --> 3  (p = 0.750)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 2  (p = 0.250)
2 --> 1  (p = 0.250)
1 --> 2  (p = 0.750)
2 --> 3  (p = 0.750)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 4  (p = 0.750)
4 --> 3  (p = 1.000)
3 --> 2  (p = 0.250)
2 --> 3  (p = 0.750)
>>> states
[1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 1, 2, 3, 4, 3, 4, 3, 4, 3, 4, 3, 4, 3, 2, 3]
>>> probs
[0.2, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.25, 0.75, 0.75, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.75, 1.0, 0.25, 0.75]

Final Probability Distribution

You can easily find the final probability distribution of your random walk:

>>> rw.final_dist()
array([1., 0., 0., 0., 0.])

This implies that the walk will end up in state 0 with probability 1 as time goes on.
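You can cross-check this with plain NumPy: raising the transition matrix to a high power and applying it to the uniform initial distribution pushes all the probability mass into the absorbing state 0. A sketch:

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])

# After many steps the chain is absorbed in state 0 regardless of the start.
limit = np.linalg.matrix_power(trans, 10000)
final = np.full(5, 0.2) @ limit                  # uniform initial distribution
assert np.allclose(final, [1, 0, 0, 0, 0])
```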

Is it irreducible?

You can check whether your Markov chain is irreducible (i.e., every state is reachable from every other state):

>>> rw.is_irreducible()
False
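Since state 0 is absorbing here, no other state can be reached from it, so the chain is reducible. Reachability can be sketched with matrix powers of the adjacency structure (a plain NumPy sketch, not the library's implementation):

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])
n = trans.shape[0]

# State j is reachable from i iff ((I + A) ** (n-1))[i, j] > 0,
# where A is the 0/1 adjacency matrix of the chain.
adj = ((np.eye(n) + trans) > 0).astype(int)
reach = np.linalg.matrix_power(adj, n - 1) > 0
irreducible = bool(reach.all())
assert not irreducible      # only state 0 is reachable from state 0
```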

nth transition matrix

If you want to know the probability of moving from state i to state j in n steps, you can easily calculate the nth transition matrix:

>>> rw.trans_power(2)
array([[1.    , 0.    , 0.    , 0.    , 0.    ],
       [0.25  , 0.1875, 0.    , 0.5625, 0.    ],
       [0.0625, 0.    , 0.375 , 0.    , 0.5625],
       [0.    , 0.0625, 0.    , 0.9375, 0.    ],
       [0.    , 0.    , 0.25  , 0.    , 0.75  ]])
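This is just the matrix power P^n, so the same values can be verified with numpy.linalg.matrix_power:

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])

p2 = np.linalg.matrix_power(trans, 2)
# From state 1 back to state 1 in two steps: up then down, 0.75 * 0.25.
assert np.isclose(p2[1, 1], 0.1875)
# From state 3 back to state 3: 0.25 * 0.75 (down-up) + 0.75 * 1.0 (up-down).
assert np.isclose(p2[3, 3], 0.9375)
```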

Graph edges

You can get the graph edges as a list of (from, to, probability) tuples, one per edge:

>>> rw.get_edges()
[(0, 0, 1.0), (1, 0, 0.25), (1, 2, 0.75), (2, 1, 0.25), (2, 3, 0.75), (3, 2, 0.25), (3, 4, 0.75), (4, 3, 1.0)]
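The edge list corresponds exactly to the nonzero entries of the transition matrix, which you can reproduce directly:

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])
states = [0, 1, 2, 3, 4]

# One (from, to, probability) tuple per nonzero matrix entry.
edges = [(i, j, float(trans[i, j]))
         for i in states for j in states
         if trans[i, j] > 0]
assert edges == [(0, 0, 1.0), (1, 0, 0.25), (1, 2, 0.75), (2, 1, 0.25),
                 (2, 3, 0.75), (3, 2, 0.25), (3, 4, 0.75), (4, 3, 1.0)]
```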

Graph

Making a networkx graph object from your random walk process is also taken care of by this library:

>>> rw_graph = rw.get_graph()

Colors of Nodes [will be removed]

We have not been able to draw graphs with self-loops using networkx, so until this feature is added to networkx, blue is used for ordinary states and red for states with a self-loop.

>>> rw.get_colormap()
['red', 'blue', 'blue', 'blue', 'blue']
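The coloring simply flags states whose diagonal entry is nonzero, i.e. states with a self-loop (a sketch of the rule, not the library's code):

```python
import numpy as np

trans = np.array([[1,    0,    0,    0,    0],
                  [0.25, 0,    0.75, 0,    0],
                  [0,    0.25, 0,    0.75, 0],
                  [0,    0,    0.25, 0,    0.75],
                  [0,    0,    0,    1,    0]])

# Red for states with a self-loop (nonzero diagonal entry), blue otherwise.
colors = ['red' if trans[i, i] > 0 else 'blue' for i in range(len(trans))]
assert colors == ['red', 'blue', 'blue', 'blue', 'blue']
```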

Type of Classes

To find out which classes are recurrent and which are transient, you can use the method below; it also gives you the reduced transition matrix for each set.

>>> rw_class_types = rw.get_typeof_classes()
>>> rw_class_types['recurrent']
([0], array([[1.]]))
>>> rw_class_types['transient'][0]
[1, 2, 3, 4]
>>> rw_class_types['transient'][1]
array([[0.  , 0.75, 0.  , 0.  ],
       [0.25, 0.  , 0.75, 0.  ],
       [0.  , 0.25, 0.  , 0.75],
       [0.  , 0.  , 1.  , 0.  ]])

The Best Policy Problems

You can also set up and solve best-policy (stop/continue) problems for your random walk:

>>> states = [0, 1, 2]
>>> trans = np.array([[1, 0, 0], [1/2, 0, 1/2], [0, 1, 0]])
>>> rw = RandomWalk(states, trans, payoff=[0, 1, 4], cost=[1, 0, 2], discount=0.5)
>>> rw.best_policy()
{'continue': [], 'stop': [0, 1, 2]}
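One common way to compute such a stop/continue policy is value iteration for an optimal-stopping problem: in each state, compare the payoff of stopping now against the expected discounted value of paying the cost and continuing. The cost sign and tie-breaking conventions below are assumptions chosen for illustration, not necessarily pyrandwalk's:

```python
import numpy as np

trans = np.array([[1, 0, 0], [0.5, 0, 0.5], [0, 1, 0]])
payoff = np.array([0.0, 1.0, 4.0])
cost = np.array([1.0, 0.0, 2.0])
discount = 0.5

# Value iteration: V(s) = max(stop and take payoff,
#                             pay cost, discount, and continue).
v = payoff.copy()
for _ in range(200):
    v = np.maximum(payoff, -cost + discount * (trans @ v))

# Stop wherever stopping is at least as good as continuing.
continue_value = -cost + discount * (trans @ v)
stop = [s for s in range(len(v)) if payoff[s] >= continue_value[s]]
assert stop == [0, 1, 2]    # stopping immediately is optimal in every state
```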

References

1- Lawler, Gregory F. Introduction to stochastic processes. Chapman and Hall/CRC, 2018.
2- Markusfeng
Icon made by Becris from www.flaticon.com
