A Python SDK for building high-performance, asynchronous batch processing operators

SandAI Operator SDK

Python framework for developing data operators under the Dataflow architecture. Part of the SandAI Data Project's three-layer separation design.

Overview

The Operator SDK provides the foundation for building data processing operators that form the Dataflow layer in the SandAI architecture. These operators are atomic, reusable components that can be composed into complex pipelines and workflows.

Features

  • Asynchronous Batch Processing: Concurrent processing with configurable batch size and concurrency
  • Smart File Monitoring: Real-time file change detection with vim editor compatibility
  • Task Working Directories: Isolated working directories for each task
  • Error Recovery: Automatic handling of file operations and network interruptions
  • Standardized Interface: Consistent operator lifecycle and API design
  • Celery Integration: Built-in support for distributed task execution

Installation

conda create -n sandai-operator python=3.14=h0369b99_1_cp314t -c conda-forge
conda activate sandai-operator
cd operator-sdk
pip install -e .

Quick Start

from sandai.operator import BatchProcessor, TaskInput, TaskOutput
from pydantic import BaseModel
from typing import List, Generator

class Options(BaseModel):
    param: str = "default"

class Results(BaseModel):
    output: str

processor = BatchProcessor(name="my-processor", version="1.0.0")

@processor.on_batch(
    max_concurrency=4,
    max_batch_size=8,
    prepare_concurrency=4,
    output_concurrency=4,
)
def process_batch(
    batch_inputs: List[TaskInput[Options]], 
    operator_config: dict,
    context
) -> Generator[TaskOutput[Results], None, None]:
    
    for task_input in batch_inputs:
        # Get task working directory
        workdir = context.get_task_workdir(task_input.task_id)
        
        # Your processing logic here
        result = Results(output=f"processed-{task_input.options.param}")
        
        yield TaskOutput[Results](
            task_id=task_input.task_id,
            results=result,
            status="success"
        )

if __name__ == "__main__":
    processor.run()

prepare_concurrency and output_concurrency default to inheriting max_concurrency, so leaving them unset preserves the behavior of older versions. In the current implementation, prepare (download and input conversion), output (upload and cleanup), and channel pull/push each use an independent executor, so prepare_concurrency and output_concurrency can be raised individually instead of all stages competing for the same IO pool.
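The executor separation described above can be sketched with independent thread pools. This is a simplified illustration, not the SDK's actual implementation; the stage functions `prepare`, `process`, and `emit` are hypothetical stand-ins for download/input conversion, task processing, and upload/cleanup:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stage functions standing in for the SDK's prepare,
# processing, and output stages.
def prepare(item):
    return item * 2

def process(item):
    return item + 1

def emit(item):
    return item

def run_batch(items, max_concurrency=4, prepare_concurrency=8, output_concurrency=2):
    # Each stage gets its own executor, so raising prepare_concurrency or
    # output_concurrency does not steal threads from the processing pool.
    with ThreadPoolExecutor(prepare_concurrency) as prep_pool, \
         ThreadPoolExecutor(max_concurrency) as proc_pool, \
         ThreadPoolExecutor(output_concurrency) as out_pool:
        prepared = list(prep_pool.map(prepare, items))
        processed = list(proc_pool.map(process, prepared))
        return list(out_pool.map(emit, processed))
```

Because the pools are separate, an IO-heavy workload can widen the prepare pool without affecting how many tasks are processed concurrently.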

Core Components

  • BatchProcessor: Asynchronous batch processor with configurable concurrency
  • FileChannel: File monitoring with real-time change detection
  • ProcessingContext: Task-level working directory management
  • CeleryChannel: Distributed task execution via Celery

Architecture Integration

This SDK enables the Dataflow layer of the SandAI architecture:

  • Operators built with this SDK are deployed in the operators/ directory
  • Pipelines in the pipelines/ directory compose these operators
  • Workflows in the workflows/ directory orchestrate complete business processes

Example Operators

See the operators/ directory for complete implementations:

  • video-clipper/: Video processing operator
  • data-transformer/: Data format conversion operator

Testing

make test          # Run all tests
make test-sdk      # Run SDK core tests

Supervisor CLI

operator-sdk provides sdrun for launching multiple identical worker processes, aggregating logs, forwarding signals, and supervising worker lifecycle policies.

sdrun -w 4 --restart always -- python main.py -j --mode file
  • -w / --worker: number of worker processes to launch, default 1
  • --restart never: default; do not restart workers after a non-zero exit
  • --restart always: always restart a worker after a non-zero exit
  • --restart N: restart a worker at most N times after non-zero exits
  • --success-exit ignore: default; when a worker exits with code 0, do not affect other workers
  • --success-exit shutdown: when a worker exits with code 0, stop the remaining workers
  • --failure-exit ignore: default; when a worker exits non-zero and will not be restarted, do not affect other workers
  • --failure-exit shutdown: when a worker exits non-zero and will not be restarted, stop the remaining workers and return that worker's exit code
  • --startup-stagger SECONDS: sequential startup delay, default 0; for example 0.5 starts worker-1 after 0.5s and worker-2 after 1.0s

Policy model:

  • --restart only controls whether the exited worker itself should be restarted after a non-zero exit.
  • --success-exit controls whether a clean exit from one worker should stop the rest.
  • --failure-exit controls whether a non-zero exit from one worker, once no more restarts apply, should stop the rest.
  • If all workers eventually exit without supervisor-forced shutdown, sdrun exits with the sum of all final worker exit codes.
  • If --failure-exit shutdown is used, sdrun exits with the first non-restarted failing worker's exit code.
  • SIGTERM, SIGINT, SIGHUP, and SIGQUIT received by sdrun are forwarded to all workers.
  • Logs are prefixed with worker identity, for example [worker-2#1][stdout] ....
  • On POSIX, sdrun prefers CaoE to keep child processes tied to the parent lifecycle; if CaoE is unavailable it falls back to a dedicated process-group strategy.
  • Child processes receive SDRUN_MODE=true, SDRUN_WORLD_SIZE, SDRUN_RANK, and SDRUN_LOCAL_RANK.
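A worker can use the environment variables listed above to discover its identity. A minimal helper, assuming the variables are set exactly as documented (with a single-worker fallback when not launched by sdrun):

```python
import os

def sdrun_identity():
    """Read the rank/world-size variables sdrun sets on each child.

    Falls back to a single-worker identity when the process was not
    launched by sdrun (SDRUN_MODE is absent or not "true").
    """
    if os.environ.get("SDRUN_MODE") != "true":
        return {"world_size": 1, "rank": 0, "local_rank": 0}
    return {
        "world_size": int(os.environ["SDRUN_WORLD_SIZE"]),
        "rank": int(os.environ["SDRUN_RANK"]),
        "local_rank": int(os.environ["SDRUN_LOCAL_RANK"]),
    }
```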

Common combinations:

  • Independent workers: --restart never --success-exit ignore --failure-exit ignore
  • Fail-fast workers: --restart never --success-exit ignore --failure-exit shutdown
  • Elastic recovery on failures: --restart always --success-exit ignore --failure-exit shutdown
  • First clean completion wins: --restart never --success-exit shutdown --failure-exit shutdown
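The exit-code rules from the policy model can be summarized in a small sketch. This is an illustration of the documented behavior, not sdrun's source; `final_codes` is assumed to hold each worker's final exit code in the order workers permanently exited:

```python
def supervisor_exit_code(final_codes, failure_exit="ignore"):
    """Sketch of sdrun's documented exit-code aggregation.

    With --failure-exit shutdown, sdrun exits with the first
    non-restarted failing worker's code; otherwise it exits with the
    sum of all final worker exit codes.
    """
    if failure_exit == "shutdown":
        for code in final_codes:
            if code != 0:
                return code
    return sum(final_codes)
```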

BatchProcessor.run() also installs the parent-death guard by default on POSIX via caoe.install(). Pass install_parent_guard=False to disable that behavior when embedding the processor in another lifecycle manager.

If sdrun causes GPU memory usage to explode because multiple worker processes each hold their own copy of large tensors or model weights, consider using shared-tensor to share those tensors across processes: https://github.com/world-sim-dev/shared-tensor. This is especially useful for single-GPU, multi-process inference when the model runtime is not thread-safe and threads cannot be used safely.

FileChannel With SDRUN

When workers are launched by sdrun and the operator runs in file mode:

  • FileChannel shards input lines by line index using line_index % SDRUN_WORLD_SIZE == SDRUN_RANK.
  • Each worker processes only the JSONL rows assigned to its rank.
  • Output files are renamed by inserting the rank before the extension, for example output.jsonl becomes output.0.jsonl and output.1.jsonl.
  • If the output file has no extension, the rank suffix is appended directly to the filename.

This means sdrun -w 4 -- python main.py --mode file ... produces 4 parallel output files that must be merged by the caller if a single combined result is needed.
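The sharding and renaming rules above can be expressed directly. A minimal sketch of the documented behavior (helper names are illustrative, not SDK API):

```python
import os

def lines_for_rank(lines, rank, world_size):
    # FileChannel's sharding rule: worker `rank` keeps rows where
    # line_index % world_size == rank.
    return [line for i, line in enumerate(lines) if i % world_size == rank]

def ranked_output_path(path, rank):
    # Insert the rank before the extension: output.jsonl -> output.0.jsonl.
    # Without an extension, append the rank suffix directly.
    root, ext = os.path.splitext(path)
    return f"{root}.{rank}{ext}" if ext else f"{path}.{rank}"
```

To recombine the per-rank results, a caller can concatenate the ranked files, e.g. `cat output.*.jsonl > output.jsonl`.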

Development

Setup Local Minio

brew install minio/stable/minio
brew install minio/stable/mc
minio server var/minio

Setup Local Redis

brew install redis
brew services start redis

Setup Local Postgres

brew install postgresql
brew services start postgresql

List Services

brew services list

Creating New Operators

  1. Create operator directory in ../operators/my-operator/
  2. Implement using this SDK
  3. Deploy as Celery service
  4. Use in pipelines and workflows

Best Practices

  • Keep operators focused on single responsibilities
  • Use proper error handling and logging
  • Implement comprehensive tests
  • Document operator interfaces clearly

License

MIT License

Packaging and Upload

make build
ossutil cp dist/sandai_operator_sdk-0.2.7-py3-none-any.whl oss://python-artifacts/ -e oss-cn-shanghai.aliyuncs.com --acl public-read

Local Development Install

pip install -e /path/to/operator-sdk
