
High-performance DAG execution engine with true parallel execution, built in Rust with Python bindings


graph-sp Python Bindings

Python bindings for the graph-sp DAG execution engine.

For complete documentation and installation instructions, visit the PyPI package page or the main repository.

Installation

pip install graph-sp

Quick Start

import graph_sp

# Create a graph
graph = graph_sp.Graph()

# Add nodes
graph.add(
    "source", "Data Source",
    [],  # no inputs
    [graph_sp.Port("output", "Numbers")],
    lambda inputs: {"output": [1, 2, 3, 4, 5]}
)

graph.add(
    "doubler", "Multiply by 2",
    [graph_sp.Port("input", "Input")],
    [graph_sp.Port("output", "Output")],
    lambda inputs: {"output": [x * 2 for x in inputs["input"]]}
)

# Connect and execute
graph.add_edge("source", "output", "doubler", "input")
executor = graph_sp.Executor()
result = executor.execute(graph)

print(result.get_output("doubler", "output"))  # [2, 4, 6, 8, 10]

Examples

This directory contains complete Python examples:

  • simple_pipeline.py: Basic 3-node pipeline with graph analysis and Mermaid diagrams
  • complex_objects.py: Demonstrates nested objects, JSON, and lists
  • parallel_execution.py: Shows parallel execution with 3 independent branches
  • implicit_edges.py: Demonstrates auto_connect() with parallel branches and multi-line labels

Running Examples

# Simple pipeline
python simple_pipeline.py

# Complex data structures
python complex_objects.py

# Parallel execution (shows 44% speedup)
python parallel_execution.py

# Implicit edge mapping
python implicit_edges.py

Features

  • ⚡ True Parallel Execution: Independent nodes run concurrently (44% faster in the bundled benchmark)
  • 🔌 Port-based Architecture: Type-safe data flow between nodes
  • 🔗 Implicit Edge Mapping: Auto-connect nodes by matching port names
  • 📊 Rich Data Types: Primitives, lists, nested dicts, JSON, binary data
  • 🔍 Graph Analysis: Depth, width, sources, sinks, and optimization suggestions
  • 🎨 Rich Mermaid Diagrams: Color-coded nodes, parallel group detection, multi-line labels
  • ✅ Cycle Detection: Built-in DAG validation
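The cycle detection behind `validate()` can be illustrated with a short plain-Python sketch. This is an independent illustration of DAG validation via depth-first search, not the library's actual (Rust) implementation:

```python
# Minimal DAG cycle check via depth-first search with three-color marking.
# Illustrative only -- graph_sp performs this check natively in Rust.

def has_cycle(edges):
    """edges: dict mapping node id -> list of successor node ids."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {node: WHITE for node in edges}

    def visit(node):
        color[node] = GRAY
        for succ in edges.get(node, []):
            if color.get(succ, WHITE) == GRAY:      # back edge => cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in edges if color[n] == WHITE)

print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))  # False: a valid DAG
print(has_cycle({"a": ["b"], "b": ["a"]}))           # True: a -> b -> a
```

A graph that fails this kind of check cannot be scheduled, which is why `validate()` runs it before execution.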

API Overview

Creating Graphs

import graph_sp

# Create a new graph
graph = graph_sp.Graph()

# Add a node with a Python function
graph.add(
    "node_id",           # Unique identifier
    "Node Name",         # Display name
    [                    # Input ports
        graph_sp.Port("input1", "First Input"),
        graph_sp.Port("input2", "Second Input")
    ],
    [                    # Output ports
        graph_sp.Port("output", "Result")
    ],
    lambda inputs: {     # Node function
        "output": inputs["input1"] + inputs["input2"]
    }
)

# Connect nodes
graph.add_edge("source_node", "output_port", "target_node", "input_port")

# OR use implicit edge mapping (auto-connect by port names)
edges_created = graph.auto_connect()  # No explicit add_edge() needed!

# Validate graph (checks for cycles)
graph.validate()

Implicit Edge Mapping (No add_edge() Needed!)

# Build graphs by matching port names automatically
graph = graph_sp.Graph()

# Add nodes with matching port names
graph.add("source", "Data Source", [],
    [graph_sp.Port("data", "Data")], source_fn)

graph.add("processor", "Processor",
    [graph_sp.Port("data", "Input")],  # Matches "data" output!
    [graph_sp.Port("result", "Result")], processor_fn)

graph.add("sink", "Sink",
    [graph_sp.Port("result", "Input")],  # Matches "result" output!
    [], sink_fn)

# Auto-connect based on port name matching
edges_created = graph.auto_connect()
print(f"✓ Created {edges_created} edges automatically!")

# Generated Mermaid diagram shows all connections:
# source -->|"data→data"| processor
# processor -->|"result→result"| sink
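The name-matching idea behind `auto_connect()` can be sketched in plain Python. This is only an illustration of the matching concept; the library's Rust implementation and its tie-breaking rules for ambiguous matches may differ:

```python
# Sketch of name-based edge inference: connect each output port to every
# input port with the same name on a different node.

def auto_connect(nodes):
    """nodes: dict of node_id -> {"inputs": [...], "outputs": [...]}."""
    edges = []
    for src_id, src in nodes.items():
        for out_port in src["outputs"]:
            for dst_id, dst in nodes.items():
                if dst_id != src_id and out_port in dst["inputs"]:
                    edges.append((src_id, out_port, dst_id, out_port))
    return edges

nodes = {
    "source":    {"inputs": [],         "outputs": ["data"]},
    "processor": {"inputs": ["data"],   "outputs": ["result"]},
    "sink":      {"inputs": ["result"], "outputs": []},
}
print(auto_connect(nodes))
# [('source', 'data', 'processor', 'data'), ('processor', 'result', 'sink', 'result')]
```

The same two edges appear in the Mermaid comment above: the port name alone determines the wiring.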

Executing Graphs

# Create executor
executor = graph_sp.Executor()

# Execute graph (automatically parallelizes independent nodes)
result = executor.execute(graph)

# Get outputs
value = result.get_output("node_id", "port_name")

Graph Analysis

# Analyze structure
analysis = graph.analyze()
print(f"Nodes: {analysis.node_count}")
print(f"Edges: {analysis.edge_count}")
print(f"Depth: {analysis.depth}")
print(f"Width: {analysis.width}")  # Parallelization potential
print(f"Sources: {analysis.source_count}")
print(f"Sinks: {analysis.sink_count}")

# Get text visualization
structure = graph.visualize()
print(structure)

# Generate Mermaid diagram
mermaid = graph.to_mermaid()
print(mermaid)

Mermaid Visualization with Parallel Groups

Multi-line labels and parallel execution groups are automatically detected:

# Example with parallel branches and multi-line labels
graph = graph_sp.Graph()

graph.add("source", "Value Source", [],
    [graph_sp.Port("value", "Value")], source_fn)

# Multi-line labels using \n
graph.add("branch_a", "Branch A\\n(×2)",
    [graph_sp.Port("value", "Input")],
    [graph_sp.Port("out_a", "Output")], branch_a_fn)

graph.add("branch_b", "Branch B\\n(+50)",
    [graph_sp.Port("value", "Input")],
    [graph_sp.Port("out_b", "Output")], branch_b_fn)

graph.add("merger", "Merger",
    [graph_sp.Port("out_a", "A"), graph_sp.Port("out_b", "B")],
    [], merger_fn)

graph.auto_connect()
mermaid = graph.to_mermaid()

Generated output:

graph TD
    source["Value Source"]
    style source fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    branch_a["Branch A<br/>(×2)"]
    style branch_a fill:#fff3e0,stroke:#e65100,stroke-width:2px
    branch_b["Branch B<br/>(+50)"]
    style branch_b fill:#fff3e0,stroke:#e65100,stroke-width:2px
    merger["Merger"]
    style merger fill:#f3e5f5,stroke:#4a148c,stroke-width:2px

    %% Parallel Execution Groups Detected
    %% Group 1: 2 nodes executing in parallel

    subgraph parallel_group_1["⚡ Parallel Execution Group 1"]
        direction LR
        branch_b
        branch_a
    end
    style parallel_group_1 fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,stroke-dasharray: 5 5

    source -->|"value→value"| branch_a
    source -->|"value→value"| branch_b
    branch_a -->|"out_a→out_a"| merger
    branch_b -->|"out_b→out_b"| merger

Notice:

  • \n in node names becomes <br/> for proper line breaks
  • Parallel branches are grouped in a dashed green subgraph
  • Color-coded nodes: Blue (source), Orange (processing), Purple (sink)
  • All edges are properly connected (no disconnected nodes)
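The label conversion in the first bullet can be mimicked in plain Python, assuming it is a simple textual substitution (the actual renderer lives in the Rust core and may handle more cases):

```python
# A literal backslash-n escape in the node name, as written in the example
# above, becomes a Mermaid <br/> line break in the generated diagram.
label = "Branch A\\n(×2)"
mermaid_label = label.replace("\\n", "<br/>")
print(mermaid_label)  # Branch A<br/>(×2)
```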

Data Types

All Python types are automatically converted:

Python Type          graph-sp Type
int                  Int
float                Float
str                  String
bool                 Bool
None                 None
list                 List
dict                 Map
JSON-serializable    Json
bytes                Bytes
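The table can be expressed as a small Python function for reference. This mirrors the mapping above for illustration only; the real conversion happens inside the Rust bindings:

```python
# Map a Python value to the graph-sp type name from the table above.
# Illustrative only; the real conversion is done by the Rust layer.

def graph_sp_type(value):
    if value is None:
        return "None"
    if isinstance(value, bool):   # bool must be checked before int,
        return "Bool"             # since bool is a subclass of int
    if isinstance(value, int):
        return "Int"
    if isinstance(value, float):
        return "Float"
    if isinstance(value, str):
        return "String"
    if isinstance(value, bytes):
        return "Bytes"
    if isinstance(value, list):
        return "List"
    if isinstance(value, dict):
        return "Map"
    return "Json"  # fallback for other JSON-serializable objects

print([graph_sp_type(v) for v in [1, 2.5, "hi", True, None, [1], {"a": 1}]])
# ['Int', 'Float', 'String', 'Bool', 'None', 'List', 'Map']
```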

Complex Data Structures

# Nested objects work seamlessly
user = {
    "name": "Alice",
    "age": 30,
    "address": {
        "city": "NYC",
        "zip": "10001"
    },
    "hobbies": ["reading", "coding", "hiking"]
}

# Lists of any type
numbers = [1, 2, 3, 4, 5]
mixed = [1, "hello", 3.14, True, None]

# JSON structures
product = {
    "id": "laptop-001",
    "specs": {
        "cpu": "Intel i7",
        "ram": "16GB"
    },
    "available": True,
    "price": 999.99
}

Parallel Execution

The executor automatically identifies and parallelizes independent branches:

# Fan-out pattern: 3 branches run in parallel
#
#         source
#        /  |  \
#     slow fast medium    <- Execute concurrently!
#        \  |  /
#        merger
#
# Sequential time: 900ms (500 + 100 + 300)
# Parallel time: 500ms (max branch time)
# Speedup: 44% faster!

graph = graph_sp.Graph()

graph.add("source", "Source", [],
          [graph_sp.Port("value", "Value")],
          lambda _: {"value": 100})

# These 3 nodes will execute in parallel
graph.add("slow", "Slow Branch",
          [graph_sp.Port("input", "Input")],
          [graph_sp.Port("output", "Output")],
          lambda inputs: slow_operation(inputs["input"]))

graph.add("fast", "Fast Branch",
          [graph_sp.Port("input", "Input")],
          [graph_sp.Port("output", "Output")],
          lambda inputs: fast_operation(inputs["input"]))

graph.add("medium", "Medium Branch",
          [graph_sp.Port("input", "Input")],
          [graph_sp.Port("output", "Output")],
          lambda inputs: medium_operation(inputs["input"]))

# Connect the source to each branch (a merger node would be wired the same way)
for branch in ["slow", "fast", "medium"]:
    graph.add_edge("source", "value", branch, "input")

Building from Source

If you want to build from source instead of using PyPI:

# Clone repository
git clone https://github.com/briday1/graph-sp.git
cd graph-sp

# Install maturin
pip install maturin

# Build and install
maturin develop --release --features python

Documentation

  • Full Documentation: https://github.com/briday1/graph-sp
  • Port Data Types: See docs/PORT_DATA_TYPES.md in the repository
  • Expected Output: See EXPECTED_OUTPUT.md in this directory

Performance

Measured with the parallel_execution.py example:

  • Sequential execution: ~900ms
  • Parallel execution: ~502ms
  • Speedup: 44% faster

The executor uses Rust's tokio runtime for true concurrent execution while properly managing Python's GIL.
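The same speedup pattern can be reproduced with the standard library alone, since `time.sleep` releases the GIL the way native Rust work does. This is a plain-Python illustration of the fan-out timing above, independent of graph_sp:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def branch(delay):
    time.sleep(delay)  # releases the GIL, like native work in the Rust executor
    return delay

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(branch, [0.5, 0.1, 0.3]))
elapsed = time.perf_counter() - start

# Sequential would take ~0.9 s (0.5 + 0.1 + 0.3); parallel takes ~0.5 s,
# the duration of the slowest branch.
print(f"elapsed: {elapsed:.2f}s, results: {results}")
```

Work that holds the GIL (pure-Python computation) would not see this speedup; the benefit comes from branches whose work runs outside the interpreter.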

License

MIT License
