
The client implementation for the Evochi project.


Evochi Python Client


This project provides an easy-to-use Python client for the Evochi API, so users don't have to deal with the low-level networking details of the API protocol.

Installation

Install the package from PyPI:

pip install evochi

Or, to install from the Git source:

pip install "evochi @ git+https://github.com/neuro-soup/evochi.git/#subdirectory=clients/python"

Basic Usage

Minimal Example

The following is a minimal example of how to use the Evochi Python client. It assumes that the Evochi server is running on localhost:8080.

from dataclasses import dataclass
import random

import grpc.aio as grpc

import evochi.v1 as evochi


@dataclass
class State:
    # Shared information across all workers is stored here. The state is centrally
    # and synchronously updated at the end of each epoch.

    # The state might also contain configuration options for other workers, such
    # as `seed`, `learning_rate`, etc.

    # IMPORTANT: The state must be serializable using `pickle`. The state is only
    # sent once per epoch by a single worker and whenever a new worker joins
    # the training. However, the received state must be loaded using `pickle`
    # on each worker, which means that data structures, such as `torch.Tensor`,
    # that are stored on a GPU device must be moved to the CPU when initializing
    # and optimizing the state so that non-GPU workers can deserialize the
    # state.
    seed: float


class AwesomeWorker(evochi.Worker[State]):
    def __init__(self, channel: grpc.Channel, cores: int) -> None:
        super().__init__(channel, cores)

    def initialize(self) -> State:
        # This method is called on the first worker to join the training. Since
        # the server doesn't know anything about the state of the workers, the
        # first worker is responsible for initializing the state, which is then
        # broadcast to all subsequent workers.
        # TODO: initialize state parameters of the model
        return State(seed=42)

    def evaluate(self, epoch: int, slices: list[slice]) -> list[evochi.Eval]:
        # This method is called whenever the server requests an evaluation step
        # for the current worker. The given slices represent the index ranges of
        # the population to be evaluated.
        #
        # Here, you can perform (an arbitrary number of) environment steps or
        # whatever you want to evaluate. It is possible that during a single
        # epoch, the same worker receives multiple evaluation requests.
        #
        # Note that the length of the slice (stop-start) must be equal to the
        # number of rewards in a single `evochi.Eval` object.
        # TODO: implement a proper evaluation step
        return [
            evochi.Eval(
                slice=s,
                rewards=[
                    random.randint(-42, 42)
                    for _ in range(s.start, s.stop)
                ],
            )
            for s in slices
        ]

    def optimize(self, epoch: int, rewards: list[float]) -> State:
        # This method is called at the end of each epoch. The accumulated rewards
        # of the total population are sent to all workers to perform an optimization
        # step, which is performed in this method.
        #
        # Note that the states of all workers must remain identical, which is
        # ensured by the shared `seed` in the state. After the optimization
        # step, one worker is requested to send its state to the server, which
        # is then used to bootstrap new workers joining the training.
        # TODO: update state parameters of the model
        return State(seed=self.state.seed)


async def main() -> None:
    # Create a gRPC channel to the server. Here, the evochi server is assumed to
    # be running on localhost:8080.
    channel = grpc.insecure_channel("localhost:8080")

    # The number of cores determines the maximum slice length (stop - start)
    # that the server will ask the worker to evaluate. This does not have to
    # match the number of physical CPU/GPU cores.
    worker = AwesomeWorker(channel=channel, cores=5)

    await worker.start()


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
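Because the state must survive a `pickle` round trip on every worker, a quick sanity check like the following can catch unpicklable fields early. This is a standalone sketch, not part of the client API:

```python
import pickle
from dataclasses import dataclass


@dataclass
class State:
    seed: float


state = State(seed=42)

# Round-trip the state the same way a joining worker would receive it.
restored = pickle.loads(pickle.dumps(state))
assert restored == state
```

If the state holds GPU-resident tensors, this check will surface the problem on a machine without a GPU, which is exactly the scenario described above.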

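The invariant noted in `evaluate` (each `Eval` must carry exactly `stop - start` rewards for its slice) can be sketched in isolation. Here, `make_rewards` is a hypothetical stand-in for an actual environment rollout:

```python
def make_rewards(s: slice) -> list[float]:
    # Placeholder rollout: one reward per population index in the slice.
    return [0.0 for _ in range(s.start, s.stop)]


s = slice(3, 8)
rewards = make_rewards(s)

# The number of rewards must equal the slice length (stop - start).
assert len(rewards) == s.stop - s.start
```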