Intelligent branch exploration for LLM-powered applications, with conversation analysis and routing

chatroutes-autobranch

Controlled branching generation for LLM applications

PyPI version License: MIT Python 3.9+ Open In Colab Code style: black

Modern LLM applications often need to explore multiple reasoning paths (tree-of-thought, beam search, multi-agent systems) while staying usable and affordable. chatroutes-autobranch provides clean, standalone primitives for:

  • ๐Ÿ” Branch Detection โ€“ Identify decision points in text (enumerations, disjunctions, conditionals)
  • ๐ŸŽฏ Beam Search โ€“ Pick the best K candidates by configurable scoring
  • ๐ŸŒˆ Diversity Control โ€“ Ensure variety via novelty pruning (cosine similarity, MMR)
  • ๐Ÿ›‘ Smart Stopping โ€“ Know when to stop via entropy/information-gain metrics
  • ๐Ÿ’ฐ Budget Management โ€“ Keep costs predictable with token/time/node caps
  • ๐Ÿ”Œ Pluggable Design โ€“ Swap any component (scorer, embeddings, stopping criteria)

Key Features:

  • ✅ Deterministic & reproducible (fixed tie-breaking, seeded clustering)
  • ✅ Embedding-agnostic (OpenAI, HuggingFace, or custom)
  • ✅ Production-ready (thread-safe, observable, checkpoint/resume)
  • ✅ Framework-friendly (works with LangChain, LlamaIndex, or raw LLM APIs)
  • ✅ Zero vendor lock-in (MIT License, no cloud dependencies)

🚀 Interactive Demos (Try It Now!)

Getting Started Demo (Recommended)

Open In Colab

Perfect for first-time users! Learn the fundamentals in 5 minutes:

  • ✅ Installation and setup
  • ✅ Basic beam search examples
  • ✅ Multi-strategy scoring
  • ✅ Novelty filtering
  • ✅ Complete pipeline with budget control

No setup required: runs entirely in your browser!

Branch Detection Demo (NEW! 🎉)

Open In Colab

Analyze text for decision points! Interactive branch detection:

  • ✅ Extract branch points from LLM responses
  • ✅ Count possible conversation paths
  • ✅ Pattern-based detection (no LLM needed)
  • ✅ Optional LLM assist for complex cases
  • ✅ Try your own text interactively

Creative Writing Scenario (Advanced)

Open In Colab

See it in action with a real LLM! Complete creative writing assistant:

  • ✅ Full Ollama integration (free, local inference)
  • ✅ Multi-turn branching (tree exploration)
  • ✅ GPU/CPU performance comparison
  • ✅ 4 complete story scenarios

📚 View all notebooks →


Quick Start

Install:

pip install chatroutes-autobranch

Basic Usage:

from chatroutes_autobranch import BranchSelector, Candidate
from chatroutes_autobranch.config import load_config

# Load config (or use dict/env vars)
selector = BranchSelector.from_config(load_config("config.yaml"))

# Define parent and candidate branches
parent = Candidate(id="root", text="Explain photosynthesis simply")
candidates = [
    Candidate(id="c1", text="Start with sunlight absorption"),
    Candidate(id="c2", text="Begin with glucose production"),
    Candidate(id="c3", text="Explain chlorophyll's role"),
]

# Select best branches (applies beam → novelty → entropy pipeline)
result = selector.step(parent, candidates)

print(f"Kept: {[c.id for c in result.kept]}")
print(f"Entropy: {result.metrics['entropy']['value']:.2f}")
print(f"Should continue: {result.metrics['entropy']['continue']}")

Config (config.yaml):

beam:
  k: 3  # Keep top 3 by score
  weights: {confidence: 0.4, relevance: 0.3, novelty_parent: 0.2}

novelty:
  method: cosine  # or 'mmr' for Maximal Marginal Relevance
  threshold: 0.85

entropy:
  min_entropy: 0.6  # Stop if diversity drops below 60%

embeddings:
  provider: openai
  model: text-embedding-3-large
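
The Quick Start comment above ("or use dict/env vars") suggests the same settings can be supplied programmatically. A sketch of the equivalent dict, assuming from_config also accepts a plain mapping mirroring the YAML (an assumption, not confirmed here):

```python
# Hypothetical dict equivalent of config.yaml (assumes dict input is accepted)
config = {
    "beam": {
        "k": 3,  # keep top 3 by score
        "weights": {"confidence": 0.4, "relevance": 0.3, "novelty_parent": 0.2},
    },
    "novelty": {"method": "cosine", "threshold": 0.85},
    "entropy": {"min_entropy": 0.6},
    "embeddings": {"provider": "openai", "model": "text-embedding-3-large"},
}
# selector = BranchSelector.from_config(config)  # if dict input is supported
```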

๐Ÿ” Branch Detection (NEW!)

Analyze text to identify decision points before generating branches:

from chatroutes_autobranch import BranchExtractor

# Analyze LLM response for branch points
text = """
Backend options:
1. Flask - lightweight
2. FastAPI - modern
3. Django - full-featured

Database: Postgres or MySQL
"""

extractor = BranchExtractor()
branch_points = extractor.extract(text)

print(f"Found {len(branch_points)} decision points")
# Output: Found 2 decision points

print(f"Max paths: {extractor.count_max_leaves(branch_points)}")
# Output: Max paths: 6 (3 backends × 2 databases)

Features:

  • ✅ Deterministic pattern matching – No LLM needed (fast, free)
  • ✅ Detects multiple patterns – Enumerations, disjunctions, conditionals
  • ✅ Combinatorial counting – Calculate max possible paths (Π kᵢ)
  • ✅ Optional LLM assist – Fallback for complex/implicit cases
  • ✅ Statistics & analysis – Breakdown by type, complexity metrics

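The max-paths count (Π kᵢ) is simply the product of the option counts at each independent decision point, as a standalone sketch (not the library's implementation):

```python
from math import prod

# Each branch point contributes k_i options; independent points multiply.
options_per_point = [3, 2]  # e.g. 3 backends, 2 databases, as in the example above
max_paths = prod(options_per_point)
print(max_paths)  # 6
```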
Use Cases:

  • Pre-analyze LLM responses before branching
  • Count conversation path complexity
  • Estimate branching potential from text
  • Extract structured choices from unstructured responses

Try it in Colab →


Why Use This?

Problem: Exploring multiple LLM reasoning paths (e.g., tree-of-thought) quickly becomes:

  • Expensive – Exponential growth of branches drains API budgets
  • Redundant – Models generate similar outputs (mode collapse)
  • Uncontrolled – No clear stopping criteria (when is "enough" exploration?)

Solution: chatroutes-autobranch gives you:

  1. Beam Search to keep only the top-K candidates (quality filtering)
  2. Novelty Pruning to remove similar outputs (diversity enforcement)
  3. Entropy Stopping to detect when you've explored enough (convergence detection)
  4. Budget Limits to cap costs before runaway spending

Result: Controlled, efficient tree exploration with predictable costs.


Use Cases

Scenario | Configuration | Benefit
Branch Analysis | BranchExtractor only | Analyze text for decision points, count paths (no generation)
Tree-of-Thought Reasoning | K=5, cosine novelty, entropy stopping | Explore diverse reasoning paths without explosion
Multi-Agent Debate | K=3, MMR novelty (λ=0.3) | Select diverse agent perspectives, avoid redundancy
Code Generation | K=4, high relevance weight | Generate varied solutions, prune duplicates
Creative Writing | K=8, low novelty threshold | High diversity, explore creative space
Factual Q&A | K=2, strict budget | Focus on accuracy, minimal branching

Architecture

Two-Phase Workflow:

Phase 1: Branch Detection (Optional, Pre-Analysis)
──────────────────────────────────────────────────
Text → BranchExtractor → Branch Points → Count Max Paths
                       → Statistics
                       → Decision: Generate or Skip?

Phase 2: Branch Selection (Core Pipeline)
─────────────────────────────────────────
Raw Candidates (N)
    ↓
1. Scoring (composite: confidence + relevance + novelty + intent + reward)
    ↓
2. Beam Selection (top K by score, deterministic tie-breaking)
    ↓
3. Novelty Filtering (prune similar via cosine/MMR)
    ↓
4. Entropy Check (compute diversity, decide if should continue)
    ↓
5. Result (kept + pruned + metrics)

Pluggable Components:

  • BranchExtractor: Deterministic pattern matching (optional)
  • LLMBranchParser: LLM-based extraction (optional fallback)
  • Scorer: Composite (built-in) or custom
  • EmbeddingProvider: OpenAI, HuggingFace, or custom
  • NoveltyFilter: Cosine threshold or MMR
  • EntropyStopper: Shannon entropy or custom
  • BudgetManager: Token/time/node caps

All components use Protocol (duck typing) โ€“ swap any part without touching others.
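
The Protocol-based design can be illustrated with a minimal standalone sketch (hypothetical names, not the library's actual interfaces): any object with the right method shape satisfies the contract, no inheritance required.

```python
from typing import Protocol

class ScorerLike(Protocol):
    # Structural interface: anything with this method shape qualifies.
    def score(self, parent: str, candidates: list[str]) -> list[str]: ...

class LengthScorer:
    # No inheritance from ScorerLike needed; matching the shape is enough.
    def score(self, parent: str, candidates: list[str]) -> list[str]:
        return sorted(candidates, key=len, reverse=True)

def select(scorer: ScorerLike, parent: str, candidates: list[str]) -> list[str]:
    return scorer.score(parent, candidates)

print(select(LengthScorer(), "root", ["aa", "a", "aaa"]))  # ['aaa', 'aa', 'a']
```

This is why any single component (scorer, embeddings, stopping criterion) can be swapped without touching the others.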


Installation

Minimal:

pip install chatroutes-autobranch

With extras:

# FastAPI service (for TypeScript/other languages)
pip install chatroutes-autobranch[service]

# HuggingFace local embeddings
pip install chatroutes-autobranch[hf]

# FAISS for large-scale similarity (1000+ candidates)
pip install chatroutes-autobranch[faiss]

# All features
pip install chatroutes-autobranch[all]

Documentation

📘 Full Specification – Complete API reference, algorithms, examples, and troubleshooting


Examples

Multi-Generation Tree Exploration

from collections import deque

from chatroutes_autobranch import BranchSelector, Budget, BudgetManager, Candidate
from chatroutes_autobranch.config import load_config

# User provides an LLM generation function
def my_llm_generate(parent: Candidate, n: int) -> list[Candidate]:
    # Your LLM call here (OpenAI, Anthropic, etc.)
    responses = llm_api.generate(parent.text, n=n)
    return [Candidate(id=f"{parent.id}_{i}", text=r) for i, r in enumerate(responses)]

# Setup
selector = BranchSelector.from_config(load_config("config.yaml"))
budget_manager = BudgetManager(Budget(max_nodes=50, max_tokens=20000))

# Tree exploration (breadth-first)
root_candidate = Candidate(id="root", text="Explain photosynthesis simply")
queue = deque([root_candidate])
while queue:
    current = queue.popleft()
    children = my_llm_generate(current, n=5)

    # Check budget before selection
    if not budget_manager.admit(n_new=5, est_tokens=1000, est_ms=2000):
        break

    # Select best branches
    result = selector.step(current, children)
    budget_manager.update(actual_tokens=1200, actual_ms=1800)

    # Continue with kept candidates
    queue.extend(result.kept)

    # Stop if entropy is low (converged)
    if not result.metrics["entropy"]["continue"]:
        break

Custom Scorer

from chatroutes_autobranch import BeamSelector, BranchSelector, Candidate, ScoredCandidate, Scorer

class DomainScorer(Scorer):
    def score(self, parent: Candidate, candidates: list[Candidate]) -> list[ScoredCandidate]:
        scored = []
        for c in candidates:
            # Custom logic: prefer longer, detailed responses
            detail_score = min(len(c.text) / 1000, 1.0)
            scored.append(ScoredCandidate(id=c.id, text=c.text, score=detail_score))
        return scored

# Use in pipeline (novelty, entropy, and budget components configured elsewhere)
beam = BeamSelector(k=3, scorer=DomainScorer())
selector = BranchSelector(beam, novelty, entropy, budget)

FastAPI Service (for TypeScript/other languages)

# server.py
from fastapi import FastAPI
from chatroutes_autobranch import BranchSelector
from chatroutes_autobranch.config import load_config_from_file

app = FastAPI()
_config = load_config_from_file("config.yaml")

@app.post("/select")
async def select(parent: dict, candidates: list[dict]):
    # Create fresh selector per request (thread-safe)
    selector = BranchSelector.from_config(_config)
    result = selector.step(
        Candidate(**parent),
        [Candidate(**c) for c in candidates]
    )
    return {
        "kept": [{"id": c.id, "score": c.score} for c in result.kept],
        "metrics": result.metrics
    }

# Run: uvicorn server:app

TypeScript client:

const response = await fetch('http://localhost:8000/select', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ parent, candidates })
});
const { kept, metrics } = await response.json();

Features

Beam Search

  • Top-K selection by composite scoring
  • Deterministic tie-breaking (lexicographic ID ordering)
  • Configurable weights: confidence, relevance, novelty, intent alignment, historical reward
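
Deterministic tie-breaking can be sketched in a few lines (illustrative only, not the library's code): sort by descending score, then lexicographic ID, so equal-score candidates always resolve the same way across runs.

```python
# Top-K selection with deterministic tie-breaking: sort key is
# (-score, id), so ties on score fall back to lexicographic ID order.
candidates = [
    {"id": "c3", "score": 0.9},
    {"id": "c1", "score": 0.9},  # same score as c3: ID breaks the tie
    {"id": "c2", "score": 0.7},
]
top_k = sorted(candidates, key=lambda c: (-c["score"], c["id"]))[:2]
print([c["id"] for c in top_k])  # ['c1', 'c3']
```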

Novelty Pruning

  • Cosine similarity: Remove candidates above threshold (e.g., 0.85)
  • MMR (Maximal Marginal Relevance): Balance relevance vs diversity with λ parameter
  • Preserves score ordering (best candidates kept first)
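
Cosine-threshold pruning with preserved score ordering can be sketched as a greedy pass (illustrative only; the library batches embeddings and may differ in detail):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def prune(embedded: list[tuple[str, list[float]]], threshold: float = 0.85) -> list[str]:
    # Walk candidates in (assumed) descending-score order; drop any whose
    # similarity to an already-kept candidate exceeds the threshold.
    kept: list[tuple[str, list[float]]] = []
    for cid, vec in embedded:
        if all(cosine(vec, kv) <= threshold for _, kv in kept):
            kept.append((cid, vec))
    return [cid for cid, _ in kept]

# "b" is nearly identical to "a" and gets pruned; "c" is orthogonal and survives.
print(prune([("a", [1.0, 0.0]), ("b", [0.99, 0.14]), ("c", [0.0, 1.0])]))  # ['a', 'c']
```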

Entropy-Based Stopping

  • Shannon entropy on K-means clusters of embeddings
  • Delta-entropy tracking (stop if change < epsilon)
  • Handles edge cases (0, 1, 2 candidates)
  • Normalized to [0,1] scale
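
The normalized-entropy computation on cluster labels can be sketched as follows (the library first assigns labels via seeded K-means on embeddings; this sketch takes the labels as given):

```python
import math
from collections import Counter

def normalized_entropy(labels: list[str]) -> float:
    """Shannon entropy of cluster labels, normalized to [0, 1]."""
    n = len(labels)
    if n <= 1:
        return 0.0  # edge cases: 0 or 1 candidate carry no diversity signal
    counts = Counter(labels).values()
    h = -sum((c / n) * math.log2(c / n) for c in counts)
    return h / math.log2(n)  # log2(n) is the max (all labels distinct)

print(normalized_entropy(["a", "a", "b", "c"]))  # 0.75
```

A result below the configured min_entropy (0.6 in the example config) would signal that exploration has converged.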

Budget Management

  • Caps: max_nodes, max_tokens, max_ms
  • Modes: strict (raise on exceeded) or soft (return False, allow fallback)
  • Pre-admit: Check budget before generation
  • Post-update: Record actual usage for rolling averages
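
The admit/update cycle can be sketched with a toy tracker (the method names mirror this page's usage, but the real BudgetManager's internals, including rolling averages, may differ):

```python
class BudgetSketch:
    """Toy soft-mode budget tracker: admit() checks estimates against caps,
    update() records actual spend. Illustrative only."""
    def __init__(self, max_nodes: int, max_tokens: int):
        self.max_nodes, self.max_tokens = max_nodes, max_tokens
        self.nodes = self.tokens = 0

    def admit(self, n_new: int, est_tokens: int) -> bool:
        # Pre-admit: refuse (soft mode returns False) if the estimate
        # would push either counter past its cap.
        return (self.nodes + n_new <= self.max_nodes
                and self.tokens + est_tokens <= self.max_tokens)

    def update(self, n_new: int, actual_tokens: int) -> None:
        # Post-update: record what was actually spent.
        self.nodes += n_new
        self.tokens += actual_tokens

b = BudgetSketch(max_nodes=10, max_tokens=5000)
print(b.admit(n_new=5, est_tokens=1000))   # True
b.update(n_new=5, actual_tokens=1200)
print(b.admit(n_new=6, est_tokens=1000))   # False: 5 + 6 > 10 nodes
```

Strict mode would raise instead of returning False; the caller then has no fallback path.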

Observability

  • Structured JSON logging (PII-safe by default)
  • OpenTelemetry spans (optional)
  • Rich metrics per step (kept/pruned counts, scores, entropy, budget usage)

Checkpointing

  • Serialize selector state (entropy history, budget snapshot)
  • Resume from checkpoint (pause/resume tree exploration)
  • Schema versioning for backward compatibility

Integrations

LangChain:

from langchain.chains import LLMChain
from chatroutes_autobranch import Candidate, BranchSelector

def generate_and_select(query: str, chain: LLMChain, selector: BranchSelector):
    # Generate 5 candidates via LangChain (one inner generation list per input)
    responses = chain.generate([{"query": query}] * 5)
    candidates = [
        Candidate(id=f"c{i}", text=gens[0].text)
        for i, gens in enumerate(responses.generations)
    ]

    # Select best
    parent = Candidate(id="root", text=query)
    result = selector.step(parent, candidates)
    return result.kept

LlamaIndex: Similar pattern using QueryEngine.query() for generation

Raw APIs (OpenAI, Anthropic): See multi-generation example


Performance

Benchmarks (M1 Max, OpenAI embeddings):

Candidates | Beam K | Latency (p50) | Bottleneck
10 | 3 | 240 ms | Embedding API
50 | 5 | 520 ms | Embedding API
100 | 10 | 1.1 s | Novelty O(N²)
500 | 10 | 4.2 s | Use FAISS

Optimization tips:

  • Use local embeddings (HuggingFace) for <100ms latency
  • Enable FAISS for 100+ candidates
  • Batch embedding calls (batch_size: 64 in config)
  • Global embedding cache for repeated candidates
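
Batching embedding calls is straightforward to sketch generically (chunk the texts, one API call per chunk); the provider-specific call itself is omitted:

```python
def batched(texts: list[str], batch_size: int = 64):
    # Yield successive chunks so each embedding API call carries one batch
    # instead of one request per text.
    for i in range(0, len(texts), batch_size):
        yield texts[i:i + batch_size]

chunks = list(batched([f"t{i}" for i in range(150)], batch_size=64))
print([len(c) for c in chunks])  # [64, 64, 22]
```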

Development

Setup:

git clone https://github.com/chatroutes/chatroutes-autobranch
cd chatroutes-autobranch
pip install -e .[dev]

Run tests:

pytest tests/
pytest tests/ -v --cov=chatroutes_autobranch  # With coverage

Type checking:

mypy src/

Formatting:

black src/ tests/
ruff check src/ tests/

Benchmarks:

pytest bench/ --benchmark-only

Contributing

We welcome contributions! Please see our contributing guidelines.

Areas we'd love help with:

  • Additional novelty algorithms (DPP, k-DPP)
  • More embedding providers (Cohere, Voyage AI)
  • Adaptive K scheduling (auto-tune beam width)
  • Tree visualization tools
  • More examples (specific domains)

How to contribute:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes with tests
  4. Run tests and type checking
  5. Submit a Pull Request

Roadmap

  • v1.0.0 ✅ RELEASED (January 2025): Core components, beam search, MMR novelty, cosine filtering, entropy stopping, budget management, full test suite
  • v1.1.0 (Q2 2025): FAISS support for large-scale similarity, adaptive K scheduling
  • v1.2.0 (Q3 2025): Tree visualization tools, FastAPI service for multi-language support
  • v1.3.0 (Q4 2025): Async/await support, cluster-aware pruning
  • v2.0.0 (Q1 2026): gRPC service, TypeScript SDK, breaking API improvements

FAQ

Q: Do I need ChatRoutes cloud to use this? A: No. This library is standalone and has zero cloud dependencies. Use it with any LLM provider.

Q: Can I use this with TypeScript/JavaScript? A: Yes. Run the FastAPI service and call via HTTP. Native TS SDK planned for v2.0.0.

Q: How do I choose beam width K? A: Start with K=3-5. Use budget formula: K ≈ (budget/tokens_per_branch)^(1/depth). See tuning guide.
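
As a worked example of that formula (illustrative numbers only): with a 20,000-token budget, roughly 500 tokens per branch, and a target depth of 3, it gives K ≈ 3.4, so starting at K=3 fits the budget.

```python
budget_tokens = 20_000
tokens_per_branch = 500
depth = 3

# K ~ (budget / tokens_per_branch) ** (1 / depth) = 40 ** (1/3)
k = (budget_tokens / tokens_per_branch) ** (1 / depth)
print(round(k, 2))  # 3.42
```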

Q: What if all candidates get pruned by novelty? A: Lower threshold (e.g., 0.75) or switch to MMR. See troubleshooting.

Q: Is this deterministic? A: Yes, with fixed random seeds and deterministic tie-breaking. See tests.


License

MIT License - see LICENSE file for details.


Acknowledgements

Inspired by research in beam search, diverse selection (MMR, DPP), and LLM orchestration patterns. Built to be practical, swappable, and friendly for contributors.

Special thanks to the open-source community for tools and inspiration: LangChain, LlamaIndex, HuggingFace Transformers, FAISS, and the broader LLM ecosystem.



Built with โค๏ธ by the ChatRoutes team. Open to the community.
