
Pulsing: Backbone for distributed AI systems. Actor runtime with streaming, zero dependencies, and built-in discovery.

Project description

Pulsing


Documentation is also available in Chinese (中文文档).

Backbone for distributed AI systems.

Actor runtime. Streaming-first. Zero dependencies. Built-in discovery.

Pulsing is a distributed actor runtime built in Rust, designed for Python. Connect AI agents and services across machines — no Redis, no etcd, no YAML. Just pip install pulsing.

🚀 Zero Dependencies — Pure Rust + Tokio, no NATS/etcd/Redis

⚡ Streaming-first — Native support for streaming responses, built for LLM token generation

🌐 Built-in Discovery — SWIM/Gossip protocol for automatic cluster management

🔀 Same API Everywhere — await actor.method() works identically for local and remote Actors

🚀 Get Started in 5 Minutes

Installation

pip install pulsing

Your First Multi-Agent Application

import asyncio
import pulsing as pul
from pulsing.agent import runtime

@pul.remote
class Greeter:
    def __init__(self, display_name: str):
        self.display_name = display_name

    def greet(self, message: str) -> str:
        return f"[{self.display_name}] Received: {message}"

    async def chat_with(self, peer_name: str, message: str) -> str:
        # Use Greeter.resolve() to get a typed proxy
        peer = await Greeter.resolve(peer_name)
        return await peer.greet(f"From {self.display_name}: {message}")

async def main():
    async with runtime():
        # Create two agents
        alice = await Greeter.spawn(display_name="Alice", name="alice")
        bob = await Greeter.spawn(display_name="Bob", name="bob")

        # Agent communication
        reply = await alice.chat_with("bob", "Hello!")
        print(reply)  # [Bob] Received: From Alice: Hello!

asyncio.run(main())

That's it! @pul.remote turns a regular class into a distributed Actor, and Greeter.resolve() enables agents to discover and communicate with each other.
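Under the hood, name-based spawn/resolve is the classic actor-proxy pattern: spawned objects register under a name, and resolve() hands back a proxy that forwards awaited calls. The sketch below is an illustrative in-process toy of that pattern, not Pulsing's implementation (whose registry is cluster-wide and backed by the Rust runtime); the registry, proxy, and decorator names here are invented for the illustration.

```python
import asyncio

# Illustrative in-process registry; Pulsing's real one spans the cluster.
_registry: dict[str, object] = {}

class _Proxy:
    """Forwards attribute access to the registered instance as async calls."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, attr):
        method = getattr(self._target, attr)
        async def call(*args, **kwargs):
            result = method(*args, **kwargs)
            # Await coroutine results; pass plain return values through.
            return await result if asyncio.iscoroutine(result) else result
        return call

def remote(cls):
    """Sketch of a @remote-style decorator adding spawn()/resolve()."""
    async def spawn(*, name, **kwargs):
        _registry[name] = cls(**kwargs)
        return _Proxy(_registry[name])
    async def resolve(name):
        return _Proxy(_registry[name])
    cls.spawn = staticmethod(spawn)
    cls.resolve = staticmethod(resolve)
    return cls

@remote
class Echo:
    def __init__(self, tag):
        self.tag = tag
    def say(self, msg):
        return f"[{self.tag}] {msg}"

async def demo():
    await Echo.spawn(name="e1", tag="E1")
    peer = await Echo.resolve("e1")
    return await peer.say("hi")

print(asyncio.run(demo()))  # [E1] hi
```

The point of the proxy layer is location transparency: the caller awaits the same method whether the target lives in-process or on another node.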

💡 I want to...

Scenario                      Example                       Description
Quick start                   examples/quickstart/          Get started in 10 lines
Multi-Agent collaboration     examples/agent/pulsing/       AI debate, brainstorming, role-playing
Distributed LLM inference     pulsing actor router/vllm     GPU cluster inference service
Integrate AutoGen             examples/agent/autogen/       One line to go distributed
Integrate LangGraph           examples/agent/langgraph/     Execute graphs across nodes

🎯 Core Capabilities

1. Multi-Agent Collaboration

Multiple AI Agents working in parallel and communicating:

from pulsing.agent import agent, runtime, llm

@agent(role="Researcher", goal="Deep analysis")
class Researcher:
    async def analyze(self, topic: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Analyze: {topic}")

@agent(role="Reviewer", goal="Evaluate proposals")
class Reviewer:
    async def review(self, proposal: str) -> str:
        client = await llm()
        return await client.ainvoke(f"Review: {proposal}")

async with runtime():
    researcher = await Researcher.spawn(name="researcher")
    reviewer = await Reviewer.spawn(name="reviewer")

    # Parallel work and collaboration
    analysis = await researcher.analyze("AI trends")
    feedback = await reviewer.review(analysis)

# Run MBTI personality discussion example
python examples/agent/pulsing/mbti_discussion.py --mock --group-size 6

# Run parallel idea generation example
python examples/agent/pulsing/parallel_ideas_async.py --mock --n-ideas 5

2. One Line to Distributed

Develop locally, scale seamlessly to clusters:

# Standalone mode (development)
async with runtime():
    agent = await MyAgent.spawn(name="agent")

# Distributed mode (production) — just add address
async with runtime(addr="0.0.0.0:8001"):
    agent = await MyAgent.spawn(name="agent")

# Other nodes auto-discover
async with runtime(addr="0.0.0.0:8002", seeds=["node1:8001"]):
    agent = await resolve("agent")  # Cross-node transparent call
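The seeds list is only a bootstrap hint: a joining node contacts any live seed, pulls that seed's membership view, merges it with its own, and the merged view then spreads epidemically through the cluster. A toy sketch of just the merge step, with plain dicts mapping peer address to a heartbeat version standing in for gossip state (an illustration of the idea, not Pulsing's wire format):

```python
# Each node's view maps peer address -> heartbeat version; higher wins.
def merge_views(local: dict, remote: dict) -> dict:
    merged = dict(local)
    for peer, version in remote.items():
        if merged.get(peer, -1) < version:
            merged[peer] = version
    return merged

seed_view = {"node1:8001": 5, "node2:8002": 3}
joiner_view = {"node3:8003": 1}

# Joining: contact a seed and merge its view into ours...
joiner_view = merge_views(joiner_view, seed_view)
# ...and the seed learns about us the same way on the next gossip exchange.
seed_view = merge_views(seed_view, joiner_view)

print(sorted(joiner_view))  # ['node1:8001', 'node2:8002', 'node3:8003']
```

Because every exchange is a symmetric merge, any single reachable seed is enough to join; after that, no node is special.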

3. LLM Inference Service

Out-of-the-box GPU cluster inference:

# Start Router (OpenAI-compatible API)
pulsing actor pulsing.actors.Router --addr 0.0.0.0:8000 --http_port 8080 --model_name my-llm

# Start vLLM Worker (can have multiple)
pulsing actor pulsing.actors.VllmWorker --model Qwen/Qwen2.5-0.5B --addr 0.0.0.0:8002 --seeds 127.0.0.1:8000

# Test
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-llm", "messages": [{"role": "user", "content": "Hello"}]}'
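Since the Router speaks an OpenAI-compatible API, the same call can be made from Python with only the standard library. A minimal sketch that builds the request from the curl example above (host, port, and payload are taken from that example; the actual send is left commented out because it needs a running Router):

```python
import json
from urllib import request

payload = {
    "model": "my-llm",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a Router running, this would return an OpenAI-compatible response:
# body = json.loads(request.urlopen(req).read())
print(req.get_method(), req.full_url)
```

Any existing OpenAI client can likewise be pointed at the Router by overriding its base URL.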

4. Agent Framework Integration

Have existing AutoGen/LangGraph code? One-line migration:

# AutoGen: Replace runtime
from pulsing.autogen import PulsingRuntime
runtime = PulsingRuntime(addr="0.0.0.0:8000")

# LangGraph: Wrap the graph
from pulsing.langgraph import with_pulsing
distributed_app = with_pulsing(app, seeds=["gpu-server:8001"])

📚 Example Guide

examples/
├── quickstart/              # ⭐ 5-minute quickstart
│   └── hello_agent.py       #    First Agent
├── agent/
│   ├── pulsing/             # ⭐⭐ Multi-Agent apps
│   │   ├── mbti_discussion.py      # MBTI personality discussion
│   │   └── parallel_ideas_async.py # Parallel idea generation
│   ├── autogen/             # AutoGen integration
│   └── langgraph/           # LangGraph integration
├── python/                  # ⭐⭐ Basic examples
│   ├── ping_pong.py         #    Actor basics
│   ├── cluster.py           #    Cluster communication
│   └── ...
└── rust/                    # Rust examples

🔧 Technical Features

  • Zero external dependencies: Pure Rust + Tokio, no NATS/etcd/Redis needed
  • Gossip protocol: Built-in SWIM protocol for node discovery and failure detection
  • Location transparency: Same API for local and remote Actors
  • Streaming messages: Native support for streaming requests/responses (LLM-ready)
  • Type safety: Rust Behavior API provides compile-time message type checking
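For intuition on the SWIM point above: each round, a node pings one random peer directly, and if that fails it asks other members to ping the peer on its behalf before suspecting it, which keeps flaky links from being mistaken for crashes. The sketch below is a deterministic in-memory toy of that round structure, not Pulsing's protocol code (real SWIM runs over the network with timeouts, k random helpers, and infection-style dissemination):

```python
import random

def probe_round(probe_links, helper_links, members, suspects, rng=random):
    """One SWIM-style probe from a single node's point of view.

    probe_links:  peers this node can reach directly (a link may be flaky).
    helper_links: peers the rest of the cluster can reach; stands in for
                  the indirect ping-req SWIM sends through other members.
    """
    target = rng.choice(members)
    if target in probe_links:       # direct ping answered
        return target, "alive"
    if target in helper_links:      # an indirect ping via a helper answered
        return target, "alive"
    suspects.add(target)            # nobody reached it: suspect it
    return target, "suspect"

members = ["n2", "n3", "n4"]
probe_links = {"n2"}                # our link to n3 is flaky; n4 has crashed
helper_links = {"n2", "n3"}         # helpers still reach n3, but not n4
suspects = set()
for target in members:              # probe each peer once, deterministically
    probe_round(probe_links, helper_links, [target], suspects)
print(suspects)  # {'n4'}: only the truly dead node is suspected
```

The indirect-probe step is what lets the flaky-linked n3 survive while the crashed n4 is still detected.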

📦 Project Structure

Pulsing/
├── crates/                   # Rust core
│   ├── pulsing-actor/        #   Actor System
│   └── pulsing-py/           #   Python bindings
├── python/pulsing/           # Python package
│   ├── actor/                #   Actor API
│   ├── agent/                #   Agent toolkit
│   ├── autogen/              #   AutoGen integration
│   └── langgraph/            #   LangGraph integration
├── examples/                 # Example code
└── docs/                     # Documentation

🛠️ Development

# Development build
maturin develop

# Run tests
pytest tests/python/
cargo test --workspace

📄 License

Apache-2.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

  • pulsing-0.1.2-cp310-abi3-win_amd64.whl (4.6 MB)
    CPython 3.10+, Windows x86-64

  • pulsing-0.1.2-cp310-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.4 MB)
    CPython 3.10+, manylinux (glibc 2.17+), x86-64

  • pulsing-0.1.2-cp310-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (5.3 MB)
    CPython 3.10+, manylinux (glibc 2.17+), ARM64

  • pulsing-0.1.2-cp310-abi3-macosx_11_0_arm64.whl (5.0 MB)
    CPython 3.10+, macOS 11.0+, ARM64

  • pulsing-0.1.2-cp310-abi3-macosx_10_12_x86_64.whl (5.2 MB)
    CPython 3.10+, macOS 10.12+, x86-64

File details

Details for the file pulsing-0.1.2-cp310-abi3-win_amd64.whl.

File metadata

  • Download URL: pulsing-0.1.2-cp310-abi3-win_amd64.whl
  • Size: 4.6 MB
  • Tags: CPython 3.10+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

  • SHA256: 993af3d2faa27152f674cd4656ba5c873bfb31e2650143b2b49b36f97bb0f35d
  • MD5: 28feca374c9ee206e28b4915cc681e2c
  • BLAKE2b-256: 8d5e8ad95cb7a3a8f02d8fbd5eaae22921a7be1aa8aa6bf6669af5eac0880e3f

Provenance

The following attestation bundles were made for pulsing-0.1.2-cp310-abi3-win_amd64.whl:

Publisher: pypi.yml on DeepLink-org/Pulsing

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pulsing-0.1.2-cp310-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

  • SHA256: 54296b794cd1642f4d7401b07f16ecd02155190f57d19aaa5384c16fe3e68dfa
  • MD5: 037ad3cca4b3b81df6a0a5c0f92edc70
  • BLAKE2b-256: aec615f76e1663331d737dd717dfc9bc238fd021f55082628a232c5b024b27dc

Provenance

Publisher: pypi.yml on DeepLink-org/Pulsing

File details

Details for the file pulsing-0.1.2-cp310-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File hashes

  • SHA256: bc6e7f5b93ebe0cc7ec975e33b7e9f0698d799864ed21e27db327214eeab60ed
  • MD5: 025c9a1cce02f8cf8f96a56036df6f09
  • BLAKE2b-256: 3bef980bb1b0402c3154c682e05860f65d694cbcb35a891ce9530a9c3cca0dea

Provenance

Publisher: pypi.yml on DeepLink-org/Pulsing

File details

Details for the file pulsing-0.1.2-cp310-abi3-macosx_11_0_arm64.whl.

File hashes

  • SHA256: 8779fee5b3b1c147e89ac7b316794abb76e1c5651d752504bb43ccc30745655c
  • MD5: 84badfb0ea5a6654c0fb412b86dff2b6
  • BLAKE2b-256: 135130ba9265f0edbd86d631ea5b6f8a285a7e59f05db2e8c3d9e9ffdfe3decd

Provenance

Publisher: pypi.yml on DeepLink-org/Pulsing

File details

Details for the file pulsing-0.1.2-cp310-abi3-macosx_10_12_x86_64.whl.

File hashes

  • SHA256: 7126837ebf51a95ff4e49e482bdbc89cbbe0b63028f44f8dbb01736401e5064d
  • MD5: 5852c6de59ce12115d0a37c61da63410
  • BLAKE2b-256: dc31de0fdee0df6cc484212793feaf6eee134f35e9c747493b8dde0fb2f76cbd

Provenance

Publisher: pypi.yml on DeepLink-org/Pulsing
