
llama.cpp server binary built from source

Project description

llama-cpp-bin

Pre-built llama.cpp server binaries distributed as a Python package. Install a wheel for your platform and run it.

Install

Pre-built wheels (recommended)

Pick the index that matches your backend (CPU, CUDA, ROCm, or Vulkan):

pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cpu llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu124 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu131 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/rocm llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/vulkan llama-cpp-bin
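
To confirm the install picked up a working wheel, you can check that the bundled server binary resolves (get_binary_path is described under Run below):

import llama_cpp_bin

# Should print the path to the bundled llama.cpp server binary.
print(llama_cpp_bin.get_binary_path())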

PyPI (builds from source)

If no pre-built wheel matches your platform, pip falls back to building from the sdist on PyPI:

pip install llama-cpp-bin

You will need CMake, a C++ compiler, and the llama.cpp source submodule.

Dev

git clone --recurse-submodules https://github.com/vladlearns/llama-cpp-bin
cd llama-cpp-bin
CMAKE_ARGS="-DGGML_CUDA=ON" pip install -v .

Run

CLI:

llama-cpp-server -m your-model.gguf --port 8080

Python:

from llama_cpp_bin import run_server
proc = run_server("your-model.gguf", port=8080)
proc.wait()
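
Once the server is up (for example from a second process, or before the blocking proc.wait() call above), it can be driven over HTTP. Below is a minimal sketch using only the standard library; the /health and /v1/chat/completions paths are the upstream llama.cpp server endpoints, and the default host 127.0.0.1 is assumed:

import json
import time
import urllib.request

base = "http://127.0.0.1:8080"

# Poll the upstream llama.cpp /health endpoint until the model has loaded.
for _ in range(60):
    try:
        urllib.request.urlopen(base + "/health", timeout=1)
        break
    except OSError:
        time.sleep(1)

# Send one OpenAI-style chat completion request.
payload = {
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 32,
}
req = urllib.request.Request(
    base + "/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])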

Or get the binary path and run it yourself:

import llama_cpp_bin
import subprocess
binary = llama_cpp_bin.get_binary_path()
subprocess.Popen([binary, "--model", "your-model.gguf"])
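
If you drive the binary yourself, it is worth keeping the process handle and shutting the server down cleanly. A sketch along those lines; --port and --ctx-size are upstream llama.cpp server flags rather than options added by this package, so check the binary's --help output for what your build supports:

import subprocess
import llama_cpp_bin

binary = llama_cpp_bin.get_binary_path()

# Launch the server with a couple of common upstream llama.cpp flags.
proc = subprocess.Popen(
    [binary, "--model", "your-model.gguf", "--port", "8080", "--ctx-size", "4096"]
)
try:
    proc.wait()           # block until the server exits (Ctrl+C to stop it)
except KeyboardInterrupt:
    proc.terminate()      # ask the server to shut down
    proc.wait(timeout=10)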

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_cpp_bin-9094.0.0.tar.gz (4.1 MB)

Uploaded Source

File details

Details for the file llama_cpp_bin-9094.0.0.tar.gz.

File metadata

  • Download URL: llama_cpp_bin-9094.0.0.tar.gz
  • Upload date:
  • Size: 4.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llama_cpp_bin-9094.0.0.tar.gz

  • SHA256: 799ed6fd72d75c4d866666f57627fddc3cb156fdd3487e797f6d23e867971138
  • MD5: 746319af38a528963766af1b6df50b86
  • BLAKE2b-256: 7edec0f5fc329d21da8d605d64eaa1bf6fb6fe2d5f59bb72eb0b7a2496ee6f86

See the pip documentation on hash-checking mode for more details on using hashes.
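
The digests above can also be checked by hand against a downloaded sdist; a minimal sketch with hashlib, using the SHA256 value from the list above:

import hashlib

# Compare the downloaded archive against the SHA256 digest listed above.
with open("llama_cpp_bin-9094.0.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == "799ed6fd72d75c4d866666f57627fddc3cb156fdd3487e797f6d23e867971138"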

Provenance

The following attestation bundles were made for llama_cpp_bin-9094.0.0.tar.gz:

Publisher: build-everything.yml on vladlearns/llama-cpp-bin

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
