
torchdemon



Documentation: https://jacknurminen.github.io/torchdemon

Source Code: https://github.com/jacknurminen/torchdemon

PyPI: https://pypi.org/project/torchdemon/


Inference Server for RL

Inference Server. Serves the model on a GPU to multiple workers. Workers communicate with the inference server over multiprocessing Pipe connections.

Dynamic Batching. Accumulates requests from workers into batches for forward passes. Set a maximum batch size or a maximum wait time to control when a batch is released for inference.
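The batching policy can be sketched as follows. This is an illustrative sketch only, not torchdemon's actual implementation: a batch is released either when it reaches the maximum size or when the maximum wait time elapses, whichever comes first.

```python
import time
from typing import Any, List

def collect_batch(queue: List[Any], batch_size: int, max_wait_ns: int) -> List[Any]:
    """Drain up to batch_size requests, waiting at most max_wait_ns."""
    batch: List[Any] = []
    deadline = time.monotonic_ns() + max_wait_ns
    while len(batch) < batch_size and time.monotonic_ns() < deadline:
        if queue:
            batch.append(queue.pop(0))
        # A real server would block on the worker pipes instead of polling.
    return batch

requests = list(range(10))
batch = collect_batch(requests, batch_size=8, max_wait_ns=1_000_000)
# batch holds the first 8 requests; the remaining 2 wait for the next cycle
```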

Installation

pip install torchdemon

Usage

Define a model

import torch

class Model(torch.nn.Module):
    def __init__(self, input_size: int, output_size: int):
        super().__init__()
        self.linear = torch.nn.Linear(input_size, output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

model = Model(8, 4)
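As a quick sanity check, the model above is a single linear layer mapping 8 input features to 4 outputs, so a batch of observations of shape (N, 8) comes back as (N, 4):

```python
import torch

# Equivalent to the Model above: a single linear layer, 8 features in, 4 out
model = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)   # a batch of 2 observations
y = model(x)
print(tuple(y.shape))   # (2, 4)
```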

Create an inference server for the model

import torchdemon

inference_server = torchdemon.InferenceServer(
    model,
    batch_size=8,           # release a batch once 8 requests have accumulated...
    max_wait_ns=1_000_000,  # ...or after 1 ms, whichever comes first
    device=torch.device("cuda:0"),
)

Create an inference client per agent and run in parallel processes

import multiprocessing

processes = []
for _ in range(multiprocessing.cpu_count()):
    inference_client = inference_server.create_client()
    agent = Agent(inference_client)
    process = multiprocessing.Process(target=play, args=(agent,))
    process.start()
    processes.append(process)
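Note that `Agent` and `play` are not defined in this README. A minimal sketch is below; in particular, the client method name (`forward`) and its signature are assumptions, so check the documentation for the actual client API.

```python
import numpy as np

class Agent:
    def __init__(self, inference_client):
        self.inference_client = inference_client

    def act(self, observation: np.ndarray):
        # Assumed API: send the observation to the server and block until
        # the batched forward pass returns this agent's slice of the output.
        return self.inference_client.forward(x=observation)

def play(agent: Agent) -> None:
    # Placeholder rollout loop; replace with real environment interaction.
    for _ in range(10):
        observation = np.random.rand(8).astype(np.float32)
        agent.act(observation)
```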

Run the server

inference_server.run()

for process in processes:
    process.join()

Development

  • Clone this repository
  • Requirements: poetry
  • Create a virtual environment and install the dependencies:

poetry install

  • Activate the virtual environment:

poetry shell

Testing

pytest

Documentation

The documentation is automatically generated from the content of the docs directory and from the docstrings of the public signatures of the source code. The documentation is updated and published as a GitHub project page automatically as part of each release.

Releasing

Trigger the Draft release workflow (press Run workflow). This will update the changelog and version and create a GitHub release in Draft state.

Find the draft release under GitHub releases and publish it. Publishing the release triggers the release workflow, which creates a PyPI release and deploys the updated documentation.

Pre-commit

Pre-commit hooks run all the auto-formatters (e.g. black, isort), linters (e.g. mypy, flake8), and other quality checks to make sure the changeset is in good shape before a commit/push happens.

You can install the hooks with (runs for each commit):

pre-commit install

Or if you want them to run only for each push:

pre-commit install -t pre-push

Or if you want to run all checks manually for all files:

pre-commit run --all-files

This project was generated using the wolt-python-package-cookiecutter template.
