
Project description

Cascade Inference

Cascade-based inference for large language models.

Installation

pip install cascade-inference

# To use semantic agreement, install the optional dependencies:
pip install cascade-inference[semantic]

Basic Usage

💡 Pro-Tip: It is highly recommended to use Level 1 client models from the same or similar model families (e.g., all Llama-based, all Qwen-based). This improves the reliability of the semantic agreement strategy. If you mix models from different families (like Llama and Gemini), consider lowering the threshold in the agreement strategy to account for stylistic differences.

Using the library is as simple as a standard OpenAI API call.

from openai import OpenAI
import cascade
import os

# Setup your clients
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ.get("OPENROUTER_API_KEY"),
)

# Call the create function directly
response = cascade.chat.completions.create(
    # Provide the ensemble of fast clients
    level1_clients=[
        (client, "meta-llama/llama-3.1-8b-instruct"),
        (client, "google/gemini-flash-1.5")
    ],
    # Provide the single, powerful client for escalation
    level2_client=(client, "openai/gpt-4o"),
    agreement_strategy="semantic", # or "strict"
    messages=[
        {"role": "user", "content": "What are the key differences between HBM3e and GDDR7 memory?"}
    ]
)

# The response object looks just like a standard OpenAI response
print(response.choices[0].message.content)
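Conceptually, the cascade queries the fast level-1 models first and only escalates to the level-2 model when they disagree. The following is a minimal sketch of that pattern, not the library's actual internals; the `cascade_complete` helper and the `agree` callback are hypothetical stand-ins:

```python
def cascade_complete(level1_clients, level2_client, messages, agree):
    """Query the fast level-1 models; escalate to level 2 on disagreement.

    level1_clients: list of (client, model_name) tuples.
    level2_client:  a single (client, model_name) tuple.
    agree:          a predicate comparing two answers (e.g. strict equality
                    or a semantic-similarity check).
    """
    answers = []
    for client, model in level1_clients:
        resp = client.chat.completions.create(model=model, messages=messages)
        answers.append(resp.choices[0].message.content)

    # If every fast model agrees with the first, return that answer cheaply.
    if all(agree(answers[0], other) for other in answers[1:]):
        return answers[0]

    # Otherwise fall back to the single, more powerful level-2 model.
    client, model = level2_client
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content
```

This is why mixing model families matters: the more often the level-1 answers agree, the less often you pay for the level-2 call.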

Advanced Configuration

For more control, you can pass a dictionary to the agreement_strategy parameter. This allows you to fine-tune the agreement logic.

1. Changing the Semantic Similarity Threshold

You can adjust how strictly the semantic comparison is applied. The threshold is a value between 0 and 1, where 1 is a perfect match. The default is 0.9.

response = cascade.chat.completions.create(
    # ... clients and messages ...
    agreement_strategy={
        "name": "semantic",
        "threshold": 0.95  # Require a 95% similarity match
    },
    # ...
)

2. Using a Different Embedding Model

The default model is sentence-transformers/all-MiniLM-L6-v2, which is fast and lightweight. You can specify any other model compatible with the FastEmbed library.

Some other excellent choices from the supported models list include:

  • nomic-ai/nomic-embed-text-v1.5
  • sentence-transformers/paraphrase-multilingual-mpnet-base-v2: For multilingual use cases.

The library will automatically download and cache the new model on the first run.

response = cascade.chat.completions.create(
    # ... clients and messages ...
    agreement_strategy={
        "name": "semantic",
        "model_name": "BAAI/bge-base-en-v1.5", # A larger, more powerful model
        "threshold": 0.85 # It's good practice to adjust the threshold for a new model
    },
    # ...
)
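Under either semantic strategy, the threshold is applied to a cosine-similarity score between embedding vectors. A rough sketch of that check using plain NumPy on precomputed vectors (the toy vectors below are illustrative values, not real model output, and `semantic_agree` is a hypothetical helper, not the library's API):

```python
import numpy as np

def semantic_agree(vec_a, vec_b, threshold=0.9):
    """Return True if two embedding vectors are similar enough to 'agree'."""
    a = np.asarray(vec_a, dtype=float)
    b = np.asarray(vec_b, dtype=float)
    # Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal.
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= threshold

print(semantic_agree([1.0, 0.1], [1.0, 0.0]))  # nearly parallel -> True
print(semantic_agree([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> False
```

This also shows why switching embedding models usually means retuning the threshold: different models map the same pair of texts to different similarity scores.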

3. Using a Remote Embedding Model

If local embedding is too slow, you can use the remote_semantic strategy. This feature is optimized for the Hugging Face Inference API and is the recommended way to perform remote comparisons.

Usage: You must provide a Hugging Face API key, which you can get for free from your account settings: huggingface.co/settings/tokens.

The key can be passed directly via the api_key parameter or set as the HUGGING_FACE_HUB_TOKEN environment variable.

The default model is sentence-transformers/all-mpnet-base-v2, but you can easily use other models from the sentence-transformers family on the Hub. We recommend the following models for the remote strategy:

  • Default & High-Quality: sentence-transformers/all-mpnet-base-v2
  • Lightweight & Fast: sentence-transformers/all-MiniLM-L6-v2
  • Multilingual: sentence-transformers/paraphrase-multilingual-mpnet-base-v2

response = cascade.chat.completions.create(
    # ... clients and messages ...
    agreement_strategy={
        "name": "remote_semantic",
        "model_name": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2", # A multilingual model
        "threshold": 0.95,
        "api_key": "hf_YourHuggingFaceToken" # Optional, can also be set via env variable
    },
    # ...
)

You can also point the strategy to a completely different API provider by overriding the api_url, but you may need to fork the RemoteSemanticAgreement class if the provider requires a different payload structure.
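To see what a fork would have to adapt, here is a sketch of assembling (not sending) a request in the shape of the Hugging Face sentence-similarity payload. The `build_similarity_request` helper is hypothetical, and the exact payload the library sends may differ; a different provider would likely need a different body:

```python
import os

def build_similarity_request(text_a, text_b, model_name, api_key=None, api_url=None):
    """Assemble URL, headers, and JSON body for a remote similarity call.

    Mirrors the Hugging Face sentence-similarity payload shape; overriding
    api_url points the same body at another endpoint.
    """
    url = api_url or f"https://api-inference.huggingface.co/models/{model_name}"
    token = api_key or os.environ.get("HUGGING_FACE_HUB_TOKEN", "")
    headers = {"Authorization": f"Bearer {token}"}
    body = {"inputs": {"source_sentence": text_a, "sentences": [text_b]}}
    return url, headers, body
```

If your provider expects a different body (for example, a flat list of texts for a feature-extraction endpoint), that is the part you would change in a fork.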

Download files

Download the file for your platform.

Source Distribution

cascade_inference-0.1.1.tar.gz (10.2 kB)

Uploaded Source

Built Distribution


cascade_inference-0.1.1-py3-none-any.whl (9.1 kB)

Uploaded Python 3

File details

Details for the file cascade_inference-0.1.1.tar.gz.

File metadata

  • Download URL: cascade_inference-0.1.1.tar.gz
  • Upload date:
  • Size: 10.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for cascade_inference-0.1.1.tar.gz
Algorithm Hash digest
SHA256 f545d99eafea5a3957764ab3ecd571a301bcabd9dfbde091cbdf7f60e10fcd5b
MD5 d819e6611e5eaef39c89dee5579a844a
BLAKE2b-256 8ee5fbeb52214edbf26c758a04be5102b443fabf5dd3fbaec8e185ad7dee6642


Provenance

The following attestation bundles were made for cascade_inference-0.1.1.tar.gz:

Publisher: publish.yml on elibutters/CascadeInference

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file cascade_inference-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for cascade_inference-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 8ca5485e9620e7483e676ff1c1764a3794fd893f2673e2aa0b3e48afc24fa9fd
MD5 6d0a233b48397e90356c6658e90ba7f0
BLAKE2b-256 209d9b1af2113b9a42849d21a9c350bd4ad95fd8ee119c90da3dca43036e74ae


Provenance

The following attestation bundles were made for cascade_inference-0.1.1-py3-none-any.whl:

Publisher: publish.yml on elibutters/CascadeInference

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
