Evochi Python Client

The client implementation for the Evochi project.

This project provides an easy-to-use Python client for the Evochi API, so that users don't have to deal with the low-level networking details of the API protocol.

Installation

Install the package from PyPI:

pip install evochi

Or, to install from the Git source:

pip install "evochi @ git+https://github.com/neuro-soup/evochi.git/#subdirectory=clients/python"

Basic Usage

Minimal Example

The following is a minimal example of how to use the Evochi Python client. It assumes that the Evochi server is running on localhost:8080.

from dataclasses import dataclass
import random

import grpc.aio as grpc

import evochi.v1 as evochi


@dataclass
class State:
    # Shared information across all workers is stored here. The state is centrally
    # and synchronously updated at the end of each epoch.

    # The state might also contain configuration options for other workers, such
    # as `seed`, `learning_rate`, etc.

    # IMPORTANT: The state must be serializable using `pickle`. The state is only
    # sent once per epoch by a single worker and whenever a new worker joins
    # the training. However, the received state must be loaded using `pickle`
    # on each worker, which means that data structures, such as `torch.Tensor`,
    # that are stored on a GPU device must be moved to the CPU when initializing
    # and optimizing the state so that non-GPU workers can deserialize the
    # state.
    seed: float


class AwesomeWorker(evochi.Worker[State]):
    def __init__(self, channel: grpc.Channel, cores: int) -> None:
        super().__init__(channel, cores)

    def initialize(self) -> State:
        # This method is called on the first worker to join the training. Since
        # the server doesn't know anything about the state of the workers, the
        # first worker is responsible for initializing the state, which is then
        # broadcast to all subsequent workers.
        # TODO: initialize state parameters of the model
        return State(seed=42)

    def evaluate(self, epoch: int, slices: list[slice]) -> list[evochi.Eval]:
        # This method is called whenever the server requests an evaluation step
        # for the current worker. The given slices represent the index ranges of
        # the population to be evaluated.
        #
        # Here, you can perform (an arbitrary number of) environment steps or
        # whatever you want to evaluate. It is possible that during a single
        # epoch, the same worker receives multiple evaluation requests.
        #
        # Note that the length of the slice (stop-start) must be equal to the
        # number of rewards in a single `evochi.Eval` object.
        # TODO: implement a proper evaluation step
        return [
            evochi.Eval(
                slice=sl,
                rewards=[
                    random.randint(-42, 42)
                    for _ in range(sl.stop - sl.start)
                ],
            )
            for sl in slices
        ]

    def optimize(self, epoch: int, rewards: list[float]) -> State:
        # This method is called at the end of each epoch. The accumulated rewards
        # of the total population are sent to all workers to perform an optimization
        # step, which is performed in this method.
        #
        # All workers must hold identical states, which is ensured by deriving
        # randomness from the shared `seed`. After the optimization step, one
        # worker is requested to send its state to the server, which then
        # serves it to new workers joining the training.
        # TODO: update state parameters of the model
        return State(seed=self.state.seed)


async def main() -> None:
    # Create a gRPC channel to the server. Here, the evochi server is assumed to
    # be running on localhost:8080.
    channel = grpc.insecure_channel("localhost:8080")

    # The number of cores determines the maximum slice length (stop - start)
    # that the server will ask this worker to evaluate at once. This does not
    # have to equal the number of physical CPU/GPU cores.
    worker = AwesomeWorker(channel=channel, cores=5)

    await worker.start()


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
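Because the state must survive a `pickle` round trip on every worker, it is worth verifying serialization before training starts. A minimal, standalone sketch using only the standard library (the `State` dataclass mirrors the one above; `check_state_roundtrip` is an illustrative helper, not part of the evochi API):

```python
import pickle
from dataclasses import dataclass


@dataclass
class State:
    seed: float


def check_state_roundtrip(state: State) -> State:
    # Serialize and deserialize the state exactly as the client does.
    # This raises if the state holds something non-picklable, e.g. an
    # open file handle, a lambda, or a live GPU tensor on some setups.
    payload = pickle.dumps(state)
    return pickle.loads(payload)


restored = check_state_roundtrip(State(seed=42))
assert restored == State(seed=42)
```

Running a check like this once at startup catches serialization problems early, instead of at the end of the first epoch when the state is broadcast.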
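The contract in `evaluate` is that each returned `Eval` carries exactly one reward per population index in its slice. A standalone sketch of that invariant (the `Eval` dataclass here is a stand-in so the snippet runs without evochi installed; the real client ships its own `Eval` type):

```python
import random
from dataclasses import dataclass


@dataclass
class Eval:
    # Stand-in for evochi.Eval: a population index range plus its rewards.
    slice: slice
    rewards: list[float]


def evaluate(slices: list[slice]) -> list[Eval]:
    # One reward per population index: len(rewards) == stop - start.
    return [
        Eval(
            slice=s,
            rewards=[random.uniform(-42.0, 42.0) for _ in range(s.stop - s.start)],
        )
        for s in slices
    ]


evals = evaluate([slice(0, 5), slice(5, 8)])
for e in evals:
    assert len(e.rewards) == e.slice.stop - e.slice.start
```

If the lengths do not match, the server cannot assemble the full population's rewards, so asserting this locally makes debugging much easier.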
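The comment in `optimize` about equal states is the heart of the setup: every worker must reconstruct the same optimization step locally, which works only if all randomness is derived from the shared `seed`. A minimal sketch of seed-synchronized sampling (illustrative only; the actual perturbation scheme depends on your optimizer, not on evochi):

```python
import random


def perturbations(seed: float, epoch: int, population: int) -> list[float]:
    # Every worker seeds an identical RNG from the shared state, so all
    # workers reconstruct the same noise without ever exchanging it.
    rng = random.Random(hash((seed, epoch)))
    return [rng.gauss(0.0, 1.0) for _ in range(population)]


a = perturbations(seed=42.0, epoch=3, population=4)
b = perturbations(seed=42.0, epoch=3, population=4)
assert a == b  # identical on every worker
```

Any stray randomness that is not derived from the shared seed (e.g. an unseeded global RNG) silently desynchronizes the workers' states.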

