Ceylon: A Rust-based agent mesh framework for building local and distributed AI agent systems

Project description

Ceylon Python Bindings

Python bindings for Ceylon, a Rust-based agent mesh framework for building local and distributed AI agent systems.

Overview

Ceylon provides a unified API for creating agent-based systems that work seamlessly in both local (in-memory) and distributed (network-based) scenarios. The Python bindings allow you to build sophisticated agent systems using clean Python code while leveraging Rust's performance and safety.

Features

  • 🤖 Custom Agents: Create agents with synchronous message handlers
  • 🧠 LLM Integration: Built-in support for LLM agents (Ollama, OpenAI, etc.)
  • ⚡ Async Support: Concurrent LLM operations with send_message_async()
  • 🛠️ Actions/Tools: Define custom actions with automatic schema generation
  • 🌐 Mesh Architecture: Local and distributed agent communication
  • 📊 Metrics & Monitoring: Built-in metrics for performance, costs, and errors
  • 🐍 Pythonic API: Fluent builder patterns and decorators

Installation

cd bindings/python
pip install -e .

Quick Start

Simple Agent

from ceylon import Agent, PyLocalMesh

class EchoAgent(Agent):
    def on_message(self, message, context=None):
        print(f"Received: {message}")
        return f"Echo: {message}"

# Create mesh and agent
mesh = PyLocalMesh("my_mesh")
agent = EchoAgent("echo")
mesh.add_agent(agent)

# Send message
mesh.send_to("echo", "Hello!")

LLM Agent (Synchronous)

from ceylon import LlmAgent

# Create and configure
agent = LlmAgent("assistant", "ollama::gemma3:latest")
agent.with_system_prompt("You are a helpful assistant.")
agent.with_temperature(0.7)
agent.with_max_tokens(100)
agent.build()

# Send message
response = agent.send_message("What is 2+2?")
print(response)

LLM Agent (Async)

import asyncio
from ceylon import LlmAgent

async def main():
    agent = LlmAgent("assistant", "ollama::gemma3:latest")
    agent.build()

    # Concurrent queries
    tasks = [
        agent.send_message_async("What is 2+2?"),
        agent.send_message_async("What is 3+3?"),
        agent.send_message_async("What is 5+5?"),
    ]

    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(response)

asyncio.run(main())

Custom Actions

from ceylon import Agent

class CalculatorAgent(Agent):
    def __init__(self, name):
        super().__init__(name)

    @Agent.action(name="add")
    def add(self, a: int, b: int) -> int:
        """Add two numbers"""
        return a + b

    @Agent.action(name="multiply")
    def multiply(self, a: int, b: int) -> int:
        """Multiply two numbers"""
        return a * b

# Create agent
agent = CalculatorAgent("calc")

# Invoke actions
result = agent.tool_invoker.invoke("add", '{"a": 5, "b": 3}')
print(result)  # 8

Metrics and Monitoring

Ceylon includes built-in metrics collection for monitoring performance, costs, and errors:

import ceylonai_next as ceylon

# Run your agents...
# mesh.send_to("agent", "message")

# Get metrics snapshot
metrics = ceylon.get_metrics()

# Available metrics
print(f"Messages processed: {metrics['message_throughput']}")
print(f"Avg latency: {metrics['avg_message_latency_us']/1000:.2f} ms")
print(f"LLM tokens used: {metrics['total_llm_tokens']}")
print(f"LLM cost: ${metrics['total_llm_cost_us']/1_000_000:.4f}")
print(f"Memory hit rate: {metrics['memory_hits']/(metrics['memory_hits']+metrics['memory_misses'])*100:.1f}%")
print(f"Errors: {metrics['errors']}")

Key Metrics:

  • message_throughput - Total messages processed
  • avg_message_latency_us - Average message latency (microseconds)
  • avg_agent_execution_time_us - Average agent execution time (microseconds)
  • total_llm_tokens - Total LLM tokens consumed
  • avg_llm_latency_us - Average LLM API latency (microseconds)
  • total_llm_cost_us - Total LLM cost in micro-dollars ($1 = 1,000,000 μ$)
  • memory_hits/memory_misses/memory_writes - Memory operation counts
  • errors - Dictionary of error types and counts
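
As a small illustration of how these counters can be consumed, the sketch below wraps the get_metrics() snapshot shown above in a reporting helper. The key names are taken from the list above; the zero-division guard is an addition for the case where no memory operations have been recorded yet.

import ceylonai_next as ceylon

def report_metrics(metrics: dict) -> None:
    """Print a short summary of a metrics snapshot (keys as listed above)."""
    hits = metrics.get("memory_hits", 0)
    misses = metrics.get("memory_misses", 0)
    lookups = hits + misses
    hit_rate = (hits / lookups * 100) if lookups else 0.0  # avoid division by zero
    print(f"Messages processed: {metrics.get('message_throughput', 0)}")
    print(f"Avg latency: {metrics.get('avg_message_latency_us', 0) / 1000:.2f} ms")
    print(f"LLM tokens used: {metrics.get('total_llm_tokens', 0)}")
    print(f"LLM cost: ${metrics.get('total_llm_cost_us', 0) / 1_000_000:.4f}")
    print(f"Memory hit rate: {hit_rate:.1f}%")

report_metrics(ceylon.get_metrics())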

See examples/README_METRICS.md for detailed examples.

Examples

Example scripts are located in the examples/ directory, and tests are in the tests/ directory.

Basic Examples

  • examples/demo_simple_agent.py - Basic agent with synchronous message handling

    python examples/demo_simple_agent.py
    
  • examples/demo_agent_mesh_local.py ⭐ NEW - Local mesh networking with multiple agents

    python examples/demo_agent_mesh_local.py
    

    Demonstrates:

    • Creating a local mesh network (PyLocalMesh)
    • Adding multiple custom agents to the mesh
    • Direct agent-to-agent messaging
    • Message routing patterns
    • Agent statistics tracking
  • examples/demo_conversation.py - LLM agent conversation (synchronous)

    python examples/demo_conversation.py
    
  • examples/demo_llm_mesh.py ⭐ NEW - LLM agents in mesh network

    python examples/demo_llm_mesh.py
    

    Demonstrates:

    • Multiple LlmAgents working together in PyLocalMesh
    • Specialized agents (coordinator, research, code assistant)
    • LlmMeshAgent wrapper pattern for mesh compatibility
    • Using Ollama Ministral-3:8b model
    • Agent-to-agent LLM communication

Async Examples

  • examples/demo_async_llm.py ⭐ NEW - Concurrent LLM operations (recommended)

    python examples/demo_async_llm.py
    

    Demonstrates:

    • Concurrent queries with asyncio.gather()
    • Streaming responses with asyncio.as_completed()
    • Batch processing with concurrency control
    • Error handling in async contexts
  • examples/demo_async_agent.py ✨ NEW - Async message handlers and actions

    python examples/demo_async_agent.py
    

    Demonstrates:

    • Async on_message() handlers
    • Async action execution
    • Thread-local event loop handling

Metrics Examples

  • examples/metrics_quickstart.py ⚡ NEW - Quick start guide for metrics

    python examples/metrics_quickstart.py
    

    Demonstrates:

    • Basic metrics collection with get_metrics()
    • Retrieving and displaying metrics snapshots
  • examples/metrics_demo.py 📊 NEW - Comprehensive metrics demo

    python examples/metrics_demo.py
    

    Demonstrates:

    • Message throughput and latency tracking
    • Memory cache hit rate monitoring
    • Error tracking and reporting
    • Continuous monitoring patterns

See examples/README_METRICS.md for complete metrics documentation.

Test Files

All test files are located in the tests/ directory:

  • tests/test_actions.py - Action system tests
  • tests/test_agent_messages.py - Agent messaging tests
  • tests/test_async_agent.py - Async functionality tests
  • tests/test_advanced_features.py - Advanced features
  • tests/test_bindings.py - Basic bindings tests
  • tests/test_decorator.py - Action decorator tests
  • tests/test_llm_agent.py - LLM agent tests
  • tests/test_mesh.py - Mesh operations tests
  • tests/test_ollama_simple.py - Ollama connectivity tests
  • tests/test_response.py - Response handling tests

API Reference

Core Classes

Agent

Base class for creating custom agents.

class MyAgent(Agent):
    def on_message(self, message: str, context=None) -> str:
        """Handle incoming messages (synchronous)"""
        return "response"

    @Agent.action(name="my_action")
    def custom_action(self, param: str) -> str:
        """Custom action callable by other agents"""
        return f"Processed: {param}"

Methods:

  • name() -> str - Get agent name
  • send_message(target: str, message: str) - Send message to another agent
  • on_message(message: str, context=None) - Override to handle messages

Decorators:

  • @Agent.action(name="action_name") - Register a custom action
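
Putting the methods and the decorator together, here is a minimal sketch. It assumes that send_message(target, message) routes through the mesh the agent was added to, which the method list above implies but the Quick Start does not demonstrate.

from ceylon import Agent, PyLocalMesh

class LoggerAgent(Agent):
    def on_message(self, message, context=None):
        print(f"[{self.name()}] {message}")
        return "logged"

class RelayAgent(Agent):
    """Forwards every incoming message to a fixed peer (illustrative only)."""
    def __init__(self, name, peer):
        super().__init__(name)
        self.peer = peer

    def on_message(self, message, context=None):
        # send_message(target, message) as listed above
        self.send_message(self.peer, f"relayed: {message}")
        return "relayed"

mesh = PyLocalMesh("api_demo")
mesh.add_agent(LoggerAgent("logger"))
mesh.add_agent(RelayAgent("relay", "logger"))
mesh.send_to("relay", "Hello!")  # logger should print "relayed: Hello!"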

LlmAgent

LLM-powered agent with fluent builder API.

agent = LlmAgent("name", "ollama::model_name")
agent.with_system_prompt("...")
agent.with_temperature(0.7)
agent.with_max_tokens(100)
agent.build()

Builder Methods:

  • with_system_prompt(prompt: str) - Set system prompt
  • with_temperature(temp: float) - Set temperature (0.0-1.0)
  • with_max_tokens(max: int) - Set max tokens
  • build() - Finalize configuration
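
Since the class is described as a fluent builder, the configuration calls can presumably be chained. Whether each with_* method returns the agent is an assumption not confirmed above, so the statement-per-line form from the Quick Start remains the safe default.

# Assumes each with_* method returns the agent itself (fluent style);
# if it returns None, use the statement-per-line form shown above instead.
agent = (
    LlmAgent("assistant", "ollama::gemma3:latest")
    .with_system_prompt("You are a helpful assistant.")
    .with_temperature(0.7)
    .with_max_tokens(100)
)
agent.build()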

Message Methods:

  • send_message(message: str) -> str - Synchronous LLM call
  • send_message_async(message: str) -> Awaitable[str] - Async LLM call ✅

PyLocalMesh

Local in-memory mesh for agent communication.

mesh = PyLocalMesh("mesh_name")
mesh.add_agent(agent)
mesh.send_to("agent_name", "message")

Methods:

  • add_agent(agent: Agent) - Register an agent
  • send_to(target: str, payload: str) - Send message to agent

PyAction

Custom action definition with schema generation.

from ceylon import PyAction

action = PyAction(
    name="my_action",
    description="Action description",
    schema='{"type": "object", ...}'
)
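
The schema argument is a JSON Schema string. The exact schema features Ceylon honours are not documented here, so treat the following filled-in example as an assumption; building the string with json.dumps keeps the quoting manageable.

import json
from ceylon import PyAction

# Hypothetical parameter schema for a "greet" action. The structure follows
# standard JSON Schema, which is assumed (not confirmed above) to be what
# PyAction expects in its schema argument.
greet_schema = json.dumps({
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Who to greet"},
    },
    "required": ["name"],
})

action = PyAction(
    name="greet",
    description="Greet a person by name",
    schema=greet_schema,
)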

PyToolInvoker

Execute registered actions.

invoker = agent.tool_invoker
result = invoker.invoke("action_name", '{"param": "value"}')

Async Support

✅ Fully Supported Async Features

1. send_message_async() on LlmAgent

  • Fully functional and production-ready
  • Supports concurrent execution with asyncio
  • Proper error propagation

async def example():
    agent = LlmAgent("agent", "ollama::model")
    agent.build()

    # Concurrent queries
    tasks = [agent.send_message_async(q) for q in queries]
    results = await asyncio.gather(*tasks)

2. Async on_message() handlers ✨ NEW

  • Now fully supported with thread-local event loops
  • Can use async/await in custom agent message handlers
  • Supports async actions as well

class MyAgent(Agent):
    async def on_message(self, message, context=None):
        await asyncio.sleep(0.1)  # Async operations work!
        return f"Processed: {message}"

For detailed async examples, see ASYNC_EXAMPLES.md and ASYNC_STATUS.md.

Requirements

  • Python 3.8+
  • Rust toolchain (for building from source)
  • Ollama (for LLM examples)

Installing Ollama

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull gemma3:latest

Development

Building from Source

cd bindings/python
cargo build --release
pip install -e .

Running Tests

cd bindings/python
python -m pytest tests/

Or run individual tests:

python tests/test_actions.py
python tests/test_agent_messages.py
python tests/test_llm_agent.py

Architecture

Ceylon uses a mesh architecture where agents communicate through a unified mesh abstraction:

┌─────────────────────────────────────┐
│          Application Code           │
│            (Python/Rust)            │
└──────────────────┬──────────────────┘
                   │
                   ▼
┌─────────────────────────────────────┐
│          Agent Mesh (Rust)          │
│  ┌──────┐  ┌──────┐  ┌──────┐       │
│  │Agent1│  │Agent2│  │Agent3│       │
│  └──┬───┘  └──┬───┘  └──┬───┘       │
│     └─────────┴─────────┘           │
│     Message Routing & Delivery      │
└──────────────────┬──────────────────┘
                   │
                   ▼
┌─────────────────────────────────────┐
│  Local (In-Memory) or Distributed   │
│      (Network) Communication        │
└─────────────────────────────────────┘

Key Concepts:

  • Agents: Autonomous entities that process messages and execute actions
  • Mesh: Communication layer that routes messages between agents
  • Actions: Callable functions/tools that agents can invoke
  • Messages: Data exchanged between agents

Contributing

Contributions are welcome! Please:

  1. Check existing issues or create a new one
  2. Fork the repository
  3. Create a feature branch
  4. Make your changes with tests
  5. Submit a pull request

License

See the main Ceylon repository for license information.

Roadmap

  • Full async/await support for message handlers
  • Additional LLM provider integrations
  • Distributed mesh implementation
  • Agent lifecycle hooks
  • Advanced debugging tools
  • Performance monitoring

Status: Alpha - API may change

For more information about Ceylon, visit the main repository.

Download files

Download the file for your platform.

Source Distribution

ceylonai_next-0.2.8.tar.gz (319.8 kB)

Uploaded: Source

Built Distributions

ceylonai_next-0.2.8-cp39-abi3-win_amd64.whl (5.2 MB)

Uploaded: CPython 3.9+, Windows x86-64

ceylonai_next-0.2.8-cp39-abi3-manylinux_2_34_x86_64.whl (7.5 MB)

Uploaded: CPython 3.9+, manylinux (glibc 2.34+), x86-64

ceylonai_next-0.2.8-cp39-abi3-macosx_11_0_arm64.whl (4.7 MB)

Uploaded: CPython 3.9+, macOS 11.0+ ARM64

ceylonai_next-0.2.8-cp39-abi3-macosx_10_12_x86_64.whl (4.8 MB)

Uploaded: CPython 3.9+, macOS 10.12+ x86-64

File details

Details for the file ceylonai_next-0.2.8.tar.gz.

File metadata

  • Download URL: ceylonai_next-0.2.8.tar.gz
  • Upload date:
  • Size: 319.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for ceylonai_next-0.2.8.tar.gz
Algorithm Hash digest
SHA256 735b101137adc4a3cf9bb04ee9fccaafede477293cb7b9ce9f869a3d76ba1040
MD5 ef63bc73c8ba96bc40d440b1fb3f57b9
BLAKE2b-256 76dae875d33b3dbb2a93af38d959cbeb62b650d636b946547963f72a671284bd

Provenance

The following attestation bundles were made for ceylonai_next-0.2.8.tar.gz:

Publisher: pypi-publish.yml on ceylonai/next-processor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ceylonai_next-0.2.8-cp39-abi3-win_amd64.whl.

File metadata

File hashes

Hashes for ceylonai_next-0.2.8-cp39-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 f4c3ab2ddb3862f9359d048069a80bc1b3d35ca1fe7a334695509ba68c76c520
MD5 1c0a8a19ab29f9bbcb36bcac5dffd42d
BLAKE2b-256 0c1a1bbf11a6ffc9afbbd13e4c2477fb16f767bdbc007eacbd4075ccc9c608c8

Provenance

The following attestation bundles were made for ceylonai_next-0.2.8-cp39-abi3-win_amd64.whl:

Publisher: pypi-publish.yml on ceylonai/next-processor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ceylonai_next-0.2.8-cp39-abi3-manylinux_2_34_x86_64.whl.

File metadata

File hashes

Hashes for ceylonai_next-0.2.8-cp39-abi3-manylinux_2_34_x86_64.whl
Algorithm Hash digest
SHA256 24e442da26ac82d7a7930307ebf4fbe2ea536d9d08cb8d3094ccc027904fd583
MD5 e36cd79f631a0a3a234d5f6378050577
BLAKE2b-256 e2604788393cab988769b08d5b7f92b0ca4043b992696bb4e2575d2e789417a2

Provenance

The following attestation bundles were made for ceylonai_next-0.2.8-cp39-abi3-manylinux_2_34_x86_64.whl:

Publisher: pypi-publish.yml on ceylonai/next-processor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ceylonai_next-0.2.8-cp39-abi3-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for ceylonai_next-0.2.8-cp39-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 7fb4f1d63968f2628555e4b1d053072822c58c8442de88d2714854935dbd6229
MD5 022d8b3866e6193265dfaf6ae3daa02f
BLAKE2b-256 443ae6a147d069f86a0a7ea916aaafb40bf3d77b705f7314edbbe5b14f6d8db0

Provenance

The following attestation bundles were made for ceylonai_next-0.2.8-cp39-abi3-macosx_11_0_arm64.whl:

Publisher: pypi-publish.yml on ceylonai/next-processor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ceylonai_next-0.2.8-cp39-abi3-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for ceylonai_next-0.2.8-cp39-abi3-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 f747a3e704a103ac427e3768a1e0e7daecfa16a930f8824beed919036fbdd184
MD5 1de1b93aeb050676c771abe944c9bbfb
BLAKE2b-256 a49484b01ee89d5dbf787a357867e7359960fb5871ab56f521900d76ef7dcc07

Provenance

The following attestation bundles were made for ceylonai_next-0.2.8-cp39-abi3-macosx_10_12_x86_64.whl:

Publisher: pypi-publish.yml on ceylonai/next-processor

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
