[Installation] | [Documentation] | [License]

HyperNodes

Build once, cache intelligently, run anywhere.

HyperNodes is a hierarchical, modular pipeline system with intelligent caching designed for ML/AI development workflows. It treats caching as a first-class citizen, enabling developers to iterate rapidly without re-running expensive computations.

✨ Key Features

🧪 Test with One, Scale to Many

Build and test your pipeline with a single input, then run it over thousands of inputs without changing a line of code. Keep your code simple, unit-testable, and debuggable while enabling production-scale batch processing.
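In plain Python terms, the contract looks like this (a conceptual sketch, not the library's `map` implementation): the per-item function never changes, only how it is invoked.

```python
def word_count(passage: str) -> int:
    """Written and unit-tested against a single input."""
    return len(passage.strip().lower().split())

# Develop and debug with one input:
assert word_count("Hello World") == 2

# Scale to many inputs -- same function, unchanged, simply mapped:
passages = ["Hello World", "one", "a b c"]
counts = [word_count(p) for p in passages]
print(counts)  # [2, 1, 3]
```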

💾 Intelligent Caching

During development we run pipelines dozens of times with minor tweaks. HyperNodes automatically caches results at both node and per-example granularity, so it re-runs only what changed.
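The idea can be sketched in plain Python. This is a conceptual illustration of input-hash memoization, not HyperNodes' actual cache implementation:

```python
import hashlib
import json

class NodeCache:
    """Toy per-node cache keyed by a hash of the inputs (illustrative only)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def run(self, fn, **inputs):
        # Key on the function name plus a stable serialization of its inputs.
        digest = hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest()
        key = (fn.__name__, digest)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = self._store[key] = fn(**inputs)
        return result

cache = NodeCache()

def clean_text(passage):
    return passage.strip().lower()

cache.run(clean_text, passage="Hello World")  # computed
cache.run(clean_text, passage="Hello World")  # served from cache
print(cache.hits, cache.misses)               # 1 1
```

Unchanged inputs hash to the same key, so the second call skips execution entirely; a real cache would also invalidate when the node's code changes.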

🪆 Hierarchical Modularity

Functions are nodes. Pipelines are built from nodes, and pipelines are themselves nodes. Build complex workflows from simple, reusable pieces.
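Because a pipeline exposes the same callable interface as a node, composition nests naturally. A plain-Python sketch of that property (not the library's implementation):

```python
def clean_text(passage):
    return passage.strip().lower()

def count_words(cleaned_text):
    return len(cleaned_text.split())

def make_pipeline(*steps):
    """Chain callables into one callable -- the result is itself a 'node'."""
    def pipeline(value):
        for step in steps:
            value = step(value)
        return value
    return pipeline

text_stats = make_pipeline(clean_text, count_words)  # a pipeline...
doubled = make_pipeline(text_stats, lambda n: n * 2)  # ...reused as a node

print(doubled("  Hello World  "))  # 4
```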

⚡ Flexible Execution

Run pipelines with different execution strategies: sequential for debugging, async for I/O-bound workloads, or distributed parallel execution with Daft for high-performance data processing.
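The trade-off between the sequential and async strategies can be illustrated with stdlib asyncio (a sketch of the general technique, not the engines' actual code): for I/O-bound nodes, concurrent awaiting finishes in roughly the time of the slowest call rather than the sum of all calls.

```python
import asyncio
import time

async def fetch(item):
    # Stand-in for an I/O-bound node (e.g. an API call).
    await asyncio.sleep(0.1)
    return item.upper()

async def run_sequential(items):
    # One at a time: total time is the sum of the waits.
    return [await fetch(i) for i in items]

async def run_concurrent(items):
    # All at once: total time is roughly one wait.
    return await asyncio.gather(*(fetch(i) for i in items))

items = ["a", "b", "c"]

start = time.perf_counter()
seq = asyncio.run(run_sequential(items))
t_seq = time.perf_counter() - start

start = time.perf_counter()
conc = asyncio.run(run_concurrent(items))
t_conc = time.perf_counter() - start

assert seq == conc == ["A", "B", "C"]
print(f"sequential {t_seq:.2f}s, concurrent {t_conc:.2f}s")
```

Sequential execution remains easier to step through in a debugger, which is why it stays useful even when the async strategy is faster.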


📚 Documentation

The full documentation is available in the docs/ directory.


🚀 Quick Start

Installation

pip install hypernodes
# or with uv
uv add hypernodes

Basic Example

from hypernodes import Pipeline, node

# Define functions as nodes
@node(output_name="cleaned_text")
def clean_text(passage: str) -> str:
    return passage.strip().lower()

@node(output_name="word_count")
def count_words(cleaned_text: str) -> int:
    return len(cleaned_text.split())

# Build pipeline - dependencies are automatically resolved
pipeline = Pipeline(nodes=[clean_text, count_words])

# Test with single input
result = pipeline.run(inputs={"passage": "Hello World"})
print(result)  # {'cleaned_text': 'hello world', 'word_count': 2}

# Scale to many inputs - each item cached independently
results = pipeline.map(
    inputs={"passage": ["Hello", "World", "Foo", "Bar"]},
    map_over="passage",
)
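"Dependencies are automatically resolved" by name: a node parameter called `cleaned_text` is fed from the node whose `output_name` is `cleaned_text`. A minimal sketch of that name-based wiring using `inspect` (a hypothetical `resolve_and_run` helper, not the library's resolver):

```python
import inspect

def resolve_and_run(nodes, inputs):
    """Run nodes whose parameters are all available, wiring outputs by name."""
    values = dict(inputs)
    pending = list(nodes)  # (fn, output_name) pairs
    while pending:
        for fn, output_name in pending:
            params = inspect.signature(fn).parameters
            if all(p in values for p in params):
                values[output_name] = fn(**{p: values[p] for p in params})
                pending.remove((fn, output_name))
                break
        else:
            raise ValueError("unsatisfiable dependencies")
    return values

def clean_text(passage):
    return passage.strip().lower()

def count_words(cleaned_text):
    return len(cleaned_text.split())

# Node order doesn't matter; the wiring follows parameter names.
result = resolve_and_run(
    [(count_words, "word_count"), (clean_text, "cleaned_text")],
    {"passage": "Hello World"},
)
print(result["cleaned_text"], result["word_count"])  # hello world 2
```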

High-Performance Execution with Daft

from hypernodes import Pipeline
from hypernodes.engines import DaftEngine

# Distributed execution using Daft
# Requires: pip install getdaft
engine = DaftEngine(use_batch_udf=True)
pipeline = Pipeline(nodes=[clean_text, count_words], engine=engine)

# Auto-batches and executes in parallel
# Each item is cached independently
results = pipeline.map(
    inputs={"passage": ["Hello", "World"] * 1000},
    map_over="passage"
)

📄 License

MIT License - see LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

hypernodes-0.4.1.tar.gz (68.3 kB)

Uploaded Source

Built Distribution


hypernodes-0.4.1-py3-none-any.whl (81.9 kB)

Uploaded Python 3

File details

Details for the file hypernodes-0.4.1.tar.gz.

File metadata

  • Download URL: hypernodes-0.4.1.tar.gz
  • Upload date:
  • Size: 68.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.13

File hashes

Hashes for hypernodes-0.4.1.tar.gz:

  • SHA256: ff37125e64c0663871718b8d3fb7a08585854d42176ef9aade25614bd99d87ce
  • MD5: 7c87f9b92c7de6850141d495d64b8de0
  • BLAKE2b-256: 6988c2753d11c5ae9d1a3fa634d1934625ce5db8fb195e4d7faa91c282198cc4


File details

Details for the file hypernodes-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: hypernodes-0.4.1-py3-none-any.whl
  • Upload date:
  • Size: 81.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.13

File hashes

Hashes for hypernodes-0.4.1-py3-none-any.whl:

  • SHA256: 74212cf20e0cbcefe5cca168acb586427137d4261b4a047b14636546e593b41f
  • MD5: 6390cfc4072103caab381ea0a75307fc
  • BLAKE2b-256: 6a884ea25fc08a2d983c3bc8567cbdd2f49af7cc03897a980c6abdb2eaf8946f

