Enable IPFS model loading for llama-cpp-python

Project description

Llama_IPFS

Load models directly from IPFS for llama-cpp-python.

Features

  • 🌐 Direct integration with local IPFS nodes (preferred method)
  • 🔄 Automatic fallback to IPFS gateways when local node isn't available
  • 🔍 Simple URI format: ipfs://CID for easy model sharing
  • ⚡ Zero configuration required - works automatically once installed
  • 🧩 Compatible with any version of llama-cpp-python
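The ipfs:// URI form is simply the scheme prefix followed by a content identifier (CID). As a purely illustrative sketch (this helper is hypothetical, not part of the package's API), splitting such a URI looks like:

```python
def parse_ipfs_uri(uri: str) -> str:
    """Return the CID portion of an ipfs:// URI.

    Illustrative only; the package resolves these URIs internally.
    """
    prefix = "ipfs://"
    if not uri.startswith(prefix):
        raise ValueError(f"not an IPFS URI: {uri!r}")
    cid = uri[len(prefix):]
    if not cid:
        raise ValueError("empty CID")
    return cid
```

Anything after the `ipfs://` prefix is treated as the CID, which identifies the model content by hash rather than by location.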

Installation

# Note: PyPI package names use hyphens
pip install llama-ipfs
llama-ipfs activate

Once installed and activated, the llama_ipfs integration will be loaded automatically whenever you use Python.

Usage

After installation, use llama-cpp-python with IPFS model URIs:

from llama_cpp import Llama

# Load a model directly from IPFS
model = Llama.from_pretrained(
    repo_id="ipfs://bafybeie7quk74kmqg34nl2ewdwmsrlvvt6heayien364gtu2x6g2qpznhq",
    filename="ggml-model-Q4_K_M.gguf"
)

# Use the model normally
response = model.create_completion(
    "Once upon a time",
    max_tokens=128
)

Google Colab Usage

In Google Colab, you need to manually apply the patch after importing:

# Import and manually apply patch
import llama_ipfs
llama_ipfs.activate()

# Verify patch is active
print(f"IPFS patch active: {llama_ipfs.status()}")

IPFS Node Connectivity

The llama_ipfs package prioritizes connectivity in the following order:

  1. Local IPFS Node (Recommended): If you have an IPFS daemon running locally (ipfs daemon), the package will automatically detect and use it. This method:

    • Is much faster for repeated downloads
    • Loads complex model directories more reliably
    • Contributes to the IPFS network by serving content to others
  2. IPFS Gateway (Fallback): If a local node isn't available, the package will fall back to public gateways. This method:

    • Works without installing IPFS
    • May be less reliable for complex model directories
    • Downloads can be interrupted more easily
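The local-first, gateway-fallback order described above can be sketched as follows. This is a simplified illustration, not the package's actual code; the endpoints shown (a local node's HTTP gateway on its default port 8080, and the public ipfs.io gateway) are common defaults assumed here.

```python
import urllib.request

def candidate_urls(cid, filename):
    """Endpoints to try, in priority order: local node first, then a
    public gateway. Both URLs are assumed defaults for illustration."""
    return [
        f"http://127.0.0.1:8080/ipfs/{cid}/{filename}",  # local IPFS node
        f"https://ipfs.io/ipfs/{cid}/{filename}",        # public gateway fallback
    ]

def fetch_model(cid, filename, timeout=10.0):
    """Try each endpoint in order; return the first successful response."""
    last_err = None
    for url in candidate_urls(cid, filename):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:
            last_err = err  # endpoint unreachable; try the next one
    raise RuntimeError(f"all IPFS endpoints failed: {last_err}")
```

Trying the local node first keeps repeated downloads fast, while the gateway fallback keeps the feature working on machines with no IPFS daemon installed.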

Command Line Interface

# Note: CLI commands use hyphens
# Activate the auto-loading
llama-ipfs activate

# Check if the integration is active
llama-ipfs status

# Test the integration
llama-ipfs test

# Deactivate the integration
llama-ipfs deactivate

Dependencies

  • Python 3.8+
  • llama-cpp-python

License

MIT License

Download files

Download the file for your platform.

Source Distribution

llama_ipfs-0.1.1.tar.gz (14.3 kB)

Uploaded Source

Built Distribution

llama_ipfs-0.1.1-py3-none-any.whl (13.5 kB)

Uploaded Python 3

File details

Details for the file llama_ipfs-0.1.1.tar.gz.

File metadata

  • Download URL: llama_ipfs-0.1.1.tar.gz
  • Upload date:
  • Size: 14.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llama_ipfs-0.1.1.tar.gz

  • SHA256: 769bac205c51434081209ad18d6f9a7d78becec11cdcfaba9c6ef51b2d6ad909
  • MD5: e161b5e48e717d21e14302afa4b25c82
  • BLAKE2b-256: 48cd9b556e4dc8dd0319337bf456c84cb35a19501c9c69d7727c6475180e4bb5
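A downloaded file can be checked against the published SHA256 digest before installing; a minimal standard-library sketch (the filename assumes the sdist sits in the current directory):

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "769bac205c51434081209ad18d6f9a7d78becec11cdcfaba9c6ef51b2d6ad909"
# After downloading the sdist:
# assert sha256_of("llama_ipfs-0.1.1.tar.gz") == EXPECTED
```

pip can also enforce this automatically via hash-pinned requirements (`pip install --require-hashes -r requirements.txt`).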

Provenance

The following attestation bundles were made for llama_ipfs-0.1.1.tar.gz:

Publisher: publish.yml on alexbakers/llama_ipfs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llama_ipfs-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: llama_ipfs-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 13.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for llama_ipfs-0.1.1-py3-none-any.whl

  • SHA256: 8e68683aa41bbcb08e3f685c6d2fc25bbe2cf6d31aff19ca6798ed568fb44a36
  • MD5: 1526f4177a26d888f91d4e39e2c7288b
  • BLAKE2b-256: 8fbf1eabebbcd532538336c4e7b82c6b42b2d9b47fd93eaac569540676baead0

Provenance

The following attestation bundles were made for llama_ipfs-0.1.1-py3-none-any.whl:

Publisher: publish.yml on alexbakers/llama_ipfs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
