PyComex - Python Computational Experiments

Microframework to improve the experience of running and managing records of computational experiments, such as machine learning and data science experiments, in Python.

Features

  • Automatically create (nested) folder structure for results of each run of an experiment

  • Simply attach metadata such as performance metrics to the experiment object, and it is automatically stored as a JSON file

  • Easily attach file artifacts such as matplotlib figures to experiment records

  • Log messages to stdout and permanently store them in a log file

  • Ready-to-use, automatically generated boilerplate code for the analysis and post-processing of experiment data after an experiment has terminated

  • Experiment inheritance: Experiment modules can inherit from other modules and extend their functionality via parameter overwrites and hooks!
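The parameter detection mentioned above relies on uppercase module-level variables. A minimal sketch of how such parameters can be picked out of a module's globals() dict (plain Python for illustration, not pycomex's actual code):

```python
def detect_parameters(scope: dict) -> dict:
    """Pick out uppercase module-level names as experiment parameters."""
    return {
        name: value
        for name, value in scope.items()
        if name.isupper() and not name.startswith("_")
    }

# A pretend module scope: only HELLO and WORLD qualify as parameters
module_scope = {"HELLO": "hello ", "WORLD": "world!", "helper": len, "__name__": "quickstart"}
print(detect_parameters(module_scope))
# {'HELLO': 'hello ', 'WORLD': 'world!'}
```

Because the parameters live in globals(), a runner can overwrite them before the module body executes, which is what makes command-line overrides possible.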

Installation

Install the stable version with pip:

pip3 install pycomex

Or the most recent development version:

git clone https://github.com/the16thpythonist/pycomex.git
cd pycomex ; pip3 install .

Quickstart

Each computational experiment has to be bundled as a standalone Python module. Important experiment parameters are placed at the top of the module, and the actual experiment code goes inside the Experiment context manager.

Upon entering the context, a new archive folder is created for each run of the experiment.

Archiving of metadata and file artifacts, as well as error handling, is managed automatically on context exit.

# quickstart.py
"""
This doc string will be saved as the "description" meta data of the experiment records
"""
from pycomex.experiment import Experiment
from pycomex.util import Skippable

# Experiment parameters can simply be defined as uppercase global variables.
# These are automatically detected and can be overwritten from the
# command line when the experiment is invoked.
HELLO = "hello "
WORLD = "world!"

# Experiment context manager needs 3 positional arguments:
# - Path to an existing folder in which to store the results
# - A namespace name unique for each experiment
# - access to the local globals() dict
with Skippable(), (e := Experiment("/tmp", "example/quickstart", globals())):

    # Internally saved into automatically created nested dict
    # {'strings': {'hello_world': '...'}}
    e["strings/hello_world"] = HELLO + WORLD

    # Alternative to "print": the message is printed to stdout and also
    # recorded in the log file
    e.info("some debug message")

    # Automatically saves text file artifact to the experiment record folder
    file_name = "hello_world.txt"
    e.commit_raw(file_name, HELLO + WORLD)
    # e.commit_fig(file_name, fig)
    # e.commit_png(file_name, image)
    # ...

# All the code inside this context will be copied to the "analysis.py"
# file which will be created as an experiment artifact.
with Skippable(), e.analysis:
    # And we can access all the internal fields of the experiment object
    # and the experiment parameters here!
    print(HELLO, WORLD)
    print(e['strings/hello_world'])
    # logging will print to stdout but not modify the log file
    e.info('analysis done')

This example would create the following folder structure:

tmp
|- example
   |- quickstart
      |- 000
         |+ experiment_log.txt     # Contains all the log messages printed by the experiment
         |+ experiment_meta.txt    # Meta information about the experiment
         |+ experiment_data.json   # All the data that was added to the internal exp. dict
         |+ hello_world.txt        # Text artifact that was committed to the experiment
         |+ snapshot.py            # Copy of the original experiment Python module
         |+ analysis.py            # Boilerplate code to get started with analysis of results
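The slash-separated keys used with the experiment object (e.g. e["strings/hello_world"]) map to nested dictionaries in experiment_data.json. A minimal sketch of that mapping in plain Python (an illustration, not pycomex's actual implementation):

```python
import json

def set_nested(data: dict, key: str, value) -> None:
    """Insert a value under a slash-separated key, creating nested dicts."""
    *parents, leaf = key.split("/")
    node = data
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value

data = {}
set_nested(data, "strings/hello_world", "hello world!")
print(json.dumps(data))
# {"strings": {"hello_world": "hello world!"}}
```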

The analysis.py file is of special importance: it is created as a boilerplate starting point for additional code that performs analysis or post-processing on the results of the experiment. This can, for example, be used to transform data into a different format or to create plots for visualization.

Specifically note these two aspects:

  1. The analysis file contains all the code that was defined in the e.analysis context of the original experiment file. This code is automatically transferred at the end of the experiment.

  2. The analysis file imports the snapshot copy of the original experiment file. This does not trigger the experiment to be executed again: the Experiment instance automatically notices that it is being imported rather than explicitly executed. It also populates all of its internal attributes from the persistently saved data in experiment_data.json, so all the data of the experiment remains accessible without having to execute it again.

# analysis.py

# [...] imports omitted
# Importing the experiment itself
from snapshot import *

PATH = pathlib.Path(__file__).parent.absolute()
DATA_PATH = os.path.join(PATH, 'experiment_data.json')
# Load all the raw data of the experiment
with open(DATA_PATH, mode='r') as json_file:
    DATA: Dict[str, Any] = json.load(json_file)


if __name__ == '__main__':
    print('RAW DATA KEYS:')
    pprint(list(DATA.keys()))

    # ~ The analysis template from the experiment file
    # And we can access all the internal fields of the experiment object
    # and the experiment parameters here!
    print(HELLO, WORLD)
    print(e['strings/hello_world'])
    # logging will print to stdout but not modify the log file
    e.info('analysis done')
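The import-versus-execution detection described in point 2 can be approximated in plain Python. The class below is a hypothetical stand-in, not pycomex's actual API; it only illustrates how handing over globals() lets a library distinguish the two cases and restore state from the archive:

```python
import json
import os

class ExperimentStub:
    """Toy stand-in for illustration only -- not pycomex's actual class."""

    def __init__(self, data_path: str, scope: dict):
        # The experiment module passes in its globals(); its '__name__' reveals
        # whether that module is run as a script or merely imported.
        self.executed_directly = scope.get("__name__") == "__main__"
        self.data = {}
        if not self.executed_directly and os.path.exists(data_path):
            # Imported, e.g. from analysis.py: restore state from the archive
            # instead of re-running the experiment.
            with open(data_path) as f:
                self.data = json.load(f)
```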

For more information and more interesting examples, visit the documentation: https://pycomex.readthedocs.io

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

