# torchdemon

Inference Server for RL

- Documentation: https://jacknurminen.github.io/torchdemon
- Source Code: https://github.com/jacknurminen/torchdemon
- PyPI: https://pypi.org/project/torchdemon/
**Inference Server.** Serves the model on a GPU for multiple workers. Workers communicate with the inference server over multiprocessing `Pipe` connections.

**Dynamic Batching.** Accumulates requests from workers into batches for forward passes. Set a maximum batch size and/or a maximum wait time for releasing a batch for inference.
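The batching policy described above — release a batch once it reaches the maximum size, or once the oldest queued request has waited long enough — can be sketched in plain Python. The class below is purely illustrative and is not part of torchdemon's API:

```python
import time


class DynamicBatcher:
    """Illustrative sketch: accumulate requests and release a batch when it
    is full or when the oldest queued request has waited max_wait_ns."""

    def __init__(self, batch_size: int, max_wait_ns: int):
        self.batch_size = batch_size
        self.max_wait_ns = max_wait_ns
        self.queue = []
        self.first_enqueued_ns = None

    def add(self, request):
        # Start the wait clock when the first request of a batch arrives.
        if not self.queue:
            self.first_enqueued_ns = time.monotonic_ns()
        self.queue.append(request)

    def maybe_release(self):
        """Return a batch if a release condition is met, else None."""
        if not self.queue:
            return None
        full = len(self.queue) >= self.batch_size
        waited_ns = time.monotonic_ns() - self.first_enqueued_ns
        if full or waited_ns >= self.max_wait_ns:
            batch, self.queue = self.queue, []
            self.first_enqueued_ns = None
            return batch
        return None
```

A server loop would call `maybe_release` between polling its client connections, running one batched forward pass whenever it returns a batch.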
## Installation

```shell
pip install torchdemon
```
## Usage

Define a model:

```python
import torch


class Model(torch.nn.Module):
    def __init__(self, input_size: int, output_size: int):
        super().__init__()
        self.linear = torch.nn.Linear(input_size, output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)


model = Model(8, 4)
```
Create an inference server for the model:

```python
import torchdemon

inference_server = torchdemon.InferenceServer(
    model, batch_size=8, max_wait_ns=1000000, device=torch.device("cuda:0")
)
```
Create an inference client per agent and run the agents in parallel processes:

```python
import multiprocessing

processes = []
for _ in range(multiprocessing.cpu_count()):
    inference_client = inference_server.create_client()
    agent = Agent(inference_client)
    process = multiprocessing.Process(target=play, args=(agent,))
    process.start()
    processes.append(process)
```
Run the server:

```python
inference_server.run()

for process in processes:
    process.join()
```
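torchdemon handles the client/server plumbing behind `create_client` and `run` for you. As a self-contained illustration of the underlying pattern — requests gathered from `multiprocessing.Pipe` connections into one batch, a single batched "forward pass", and results scattered back — here is a minimal sketch. It uses threads and a toy model for brevity; none of these names are torchdemon API:

```python
import multiprocessing
import threading


def worker(conn, x):
    """Send one inference request over the pipe and wait for the result."""
    conn.send(x)
    conn.send(("done", conn.recv()))


# One Pipe per worker, mirroring one inference client per agent.
server_ends, threads = [], []
for x in (1, 2, 3):
    server_end, worker_end = multiprocessing.Pipe()
    t = threading.Thread(target=worker, args=(worker_end, x))
    t.start()
    server_ends.append(server_end)
    threads.append(t)

# "Server": gather one request per connection into a batch, run the
# (toy) model on the whole batch, then scatter results back.
batch = [conn.recv() for conn in server_ends]
outputs = [x * 10 for x in batch]  # stand-in for a batched forward pass
for conn, y in zip(server_ends, outputs):
    conn.send(y)

results = [conn.recv() for conn in server_ends]
for t in threads:
    t.join()
```

The real server batches across processes rather than threads, but the gather/compute/scatter structure is the same.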
## Development

- Clone this repository
- Requirements:
  - Poetry
  - Python 3.7+
- Create a virtual environment and install the dependencies:

  ```shell
  poetry install
  ```

- Activate the virtual environment:

  ```shell
  poetry shell
  ```
### Testing

```shell
pytest
```
### Documentation

The documentation is automatically generated from the content of the docs directory and from the docstrings of the public signatures of the source code. It is updated and published as a GitHub project page automatically as part of each release.
### Releasing

Trigger the Draft release workflow (press *Run workflow*). This updates the changelog and version and creates a GitHub release in the Draft state.

Find the draft among the GitHub releases and publish it. Publishing a release triggers the release workflow, which creates the PyPI release and deploys the updated documentation.
### Pre-commit

Pre-commit hooks run all the auto-formatters (e.g. black, isort), linters (e.g. mypy, flake8), and other quality checks to make sure the changeset is in good shape before a commit/push happens.

You can install the hooks with (runs for each commit):

```shell
pre-commit install
```

Or if you want them to run only for each push:

```shell
pre-commit install -t pre-push
```

Or if you want to run all checks manually for all files:

```shell
pre-commit run --all-files
```
This project was generated using the wolt-python-package-cookiecutter template.