
(deprecated in favor of libactor) An actor architecture for research software

Project description


A simple actor architecture for your research project. It helps address three problems so that you can focus on your main research:

  1. Configuring hyper-parameters of your method
  2. Speeding up the feedback cycle via easy & smart caching
  3. Running each step in your method independently.

It is even more powerful when combined with osin.

Introduction

Let's say you are developing a method, an algorithm, or a pipeline to solve a problem. In many cases, it can be viewed as a computational graph. So why not structure your code as a computational graph, where each node is a component in your method or a step in your pipeline? This makes your code more modular, and easier to release, cache, and evaluate. To see how this architecture applies, let's look at a record linkage project (linking entities in a table). A record linkage system typically has the following steps:

  1. Generate candidate entities in a table
  2. Rank the candidate entities and select the best matches.

So naturally, we will have two actors for two steps: CandidateGeneration and CandidateRanking:

import pandas as pd
from typing import Literal
from ream.prelude import BaseActor
from dataclasses import dataclass

@dataclass
class CanGenParams:
    # type of query that will be sent to ElasticSearch
    query_type: Literal["exact-match", "fuzzy-match"]

class CandidateGeneration(BaseActor[pd.DataFrame, CanGenParams]):
    VERSION = 100

    def run(self, table: pd.DataFrame):
        # generate candidate entities of the given table
        ...

@dataclass
class CanRankParams:
    # ranking method to use
    rank_method: Literal["pairwise", "columnwise"]

class CandidateRanking(BaseActor[pd.DataFrame, CanRankParams]):
    VERSION = 100

    def __init__(self, params: CanRankParams, cangen_actor: CandidateGeneration):
        super().__init__(params, [cangen_actor])

    def run(self, table: pd.DataFrame):
        # rank candidate entities of the given table
        ...
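
To make the wiring concrete, here is a minimal sketch of constructing the two actors by hand (the ActorGraph introduced next automates this). That an actor without dependencies can be built from its params alone is an assumption inferred from the CandidateRanking constructor above, not something shown in ream's docs.

# A sketch of manual construction (the ActorGraph below automates this wiring).
# Assumption: an actor with no dependencies can be constructed from its params alone.
cangen_actor = CandidateGeneration(CanGenParams(query_type="fuzzy-match"))
canrank_actor = CandidateRanking(CanRankParams(rank_method="pairwise"), cangen_actor)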

The two actors make the code more modular and closer to releasable quality. To define the linking pipeline, we can use ActorGraph:

from ream.prelude import ActorGraph, ActorNode, ActorEdge

g = ActorGraph()
cangen = g.add_node(ActorNode.new(CandidateGeneration))
canrank = g.add_node(ActorNode.new(CandidateRanking))
g.add_edge(ActorEdge(id=-1, source=cangen, target=canrank))

If we provide type hints for the actors' constructor arguments, as in the examples above, the graph can be constructed automatically from the actor classes:

from ream.prelude import ActorGraph

g = ActorGraph.auto([CandidateGeneration, CandidateRanking])

This may not seem to offer much at first, but now you can pick any actor and any of its methods to call without manually initializing the actor or parsing command-line arguments. For example, say we want to trigger the evaluate method of an actor. The actors' parameters are obtained automatically from the command-line arguments, thanks to the yada parser.

if __name__ == "__main__":
    g.run(actor_class="CandidateGeneration", actor_method="evaluate")
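
For reference, a hypothetical invocation could look like the comment below; the exact flag names depend on how yada maps dataclass fields to command-line arguments, so they are an assumption rather than ream's documented interface.

# Hypothetical command line (flag names are an assumption, derived from the params dataclasses):
#   python pipeline.py --query_type exact-match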

The evaluate method of each actor can be very useful. On the candidate generation actor, it can tell us the upper-bound accuracy of our method, so we know whether we need to improve candidate generation or candidate ranking. If a dataset actor is introduced into the computational graph, as demonstrated below, its evaluate method can report statistics about the dataset.

from ream.prelude import NoParams, BaseActor, DatasetQuery

class DatasetActor(BaseActor[str, NoParams]):
    VERSION = 100

    def run(self, query: str):
        # use a query so we can dynamically select a subset of the dataset for quick testing
        # for example: mnist[:10] -- selects the first 10 examples
        dsquery = DatasetQuery.from_string(query)

        # load the real dataset
        examples = ...
        return dsquery.select(examples)

    def evaluate(self, query: str):
        dsdict = self.run(query)
        for split, examples in dsdict.items():
            print(f"Dataset: {dsdict.name} - split {split} has {len(examples)} examples")

Let's talk about caching. Each actor, when running, is uniquely identified by its name, version, and parameters (including the parameters of its dependent actors). This is referred to as the actor state, which you can retrieve from the BaseActor.get_actor_state function. From this state, ream creates a unique folder that you can use to store your cached data (the folder can be retrieved from the BaseActor.get_working_fs function). Whenever an actor's dependency is updated, you get a new folder, so you don't need to manage the cache yourself! To set it up, initialize the ream workspace in the file that defines the actor graph:

from ream.prelude import ReamWorkspace, ActorGraph

ReamWorkspace.init("<folder>/<to>/<store>/<cache>")
g = ActorGraph()
...
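
With the workspace initialized, an actor can use its state-specific folder to cache results. Below is a rough sketch revisiting CandidateGeneration.run; treating the value returned by get_working_fs as a path-like object (and the pickle-based serialization) is an assumption made only for illustration.

import pickle

class CandidateGeneration(BaseActor[pd.DataFrame, CanGenParams]):
    VERSION = 100

    def run(self, table: pd.DataFrame):
        # get_working_fs() returns the folder tied to this actor's state
        # (name, version, params, and dependency params), as described above.
        # Treating it as a path-like object here is an assumption.
        cache_file = self.get_working_fs() / "candidates.pkl"
        if cache_file.exists():
            return pickle.loads(cache_file.read_bytes())

        candidates = ...  # generate candidate entities of the given table
        cache_file.write_bytes(pickle.dumps(candidates))
        return candidates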

Installation

pip install ream2  # not ream

Examples

Will be added later.

Download files


Source Distribution

ream2-4.5.1.tar.gz (50.0 kB)

Uploaded Source

Built Distribution


ream2-4.5.1-py3-none-any.whl (58.2 kB)

Uploaded Python 3

File details

Details for the file ream2-4.5.1.tar.gz.

File metadata

  • Download URL: ream2-4.5.1.tar.gz
  • Upload date:
  • Size: 50.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.12.4 Darwin/23.6.0

File hashes

Hashes for ream2-4.5.1.tar.gz

  • SHA256: 6fef2656e1ff2f974e5ec5cd9c3ae0699f9602d0206267d1b05b8c88db9391ab
  • MD5: d9e9759e26d77b23a5a5735563c2ae2b
  • BLAKE2b-256: 92f801a8655ce4c74b123ad92bd450ed37abd75874551a8ee0b320ac1f3b12b6


File details

Details for the file ream2-4.5.1-py3-none-any.whl.

File metadata

  • Download URL: ream2-4.5.1-py3-none-any.whl
  • Upload date:
  • Size: 58.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.12.4 Darwin/23.6.0

File hashes

Hashes for ream2-4.5.1-py3-none-any.whl

  • SHA256: 6f3b6fc2403bcbf99f5ba631bab795edbf4628cccbf3d243ac9f531c3b6d4cc9
  • MD5: d97bb5329fac9b8c92b5706d3754fbfb
  • BLAKE2b-256: a1f353bdc6c90b985c33934499b8220ca8c9b98901105ba8a2924655dc237e3a

