llama.cpp server binary built from source

Project description

llama-cpp-bin

Pre-built llama.cpp server binaries packaged as a Python package. Install the wheel for your platform and run it.

Install

Pre-built wheels (recommended)

pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cpu llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu124 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu131 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/rocm llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/vulkan llama-cpp-bin

Pin to a specific version:

pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu124 llama-cpp-bin==9095.0.0

PyPI (builds from source)

If no pre-built wheel matches your platform, pip falls back to building from the sdist on PyPI:

pip install llama-cpp-bin

You will need CMake, a C++ compiler, and the llama.cpp source submodule.

Dev

git clone --recurse-submodules https://github.com/vladlearns/llama-cpp-bin
cd llama-cpp-bin
CMAKE_ARGS="-DGGML_CUDA=ON" pip install -v .

Run

CLI:

llama-cpp-server -m your-model.gguf --port 8080

Python:

from llama_cpp_bin import run_server
proc = run_server("your-model.gguf", port=8080)
proc.wait()
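run_server hands back a process handle; to make sure the server is torn down even when your code raises, you can wrap the handle in a small context manager. This is a sketch that assumes the handle behaves like subprocess.Popen, as the examples here suggest:

```python
import contextlib
import subprocess


@contextlib.contextmanager
def managed_server(proc: subprocess.Popen):
    """Yield the server process and terminate it on exit, even on error."""
    try:
        yield proc
    finally:
        proc.terminate()
        try:
            proc.wait(timeout=10)
        except subprocess.TimeoutExpired:
            # Escalate if the server ignores SIGTERM.
            proc.kill()
            proc.wait()
```

Usage would then look like `with managed_server(run_server("your-model.gguf", port=8080)) as proc: ...`, with the server shut down when the block exits.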

Or get the binary path and run it yourself:

import llama_cpp_bin
import subprocess
binary = llama_cpp_bin.get_binary_path()
subprocess.Popen([binary, "--model", "your-model.gguf"])
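The server takes a moment to load the model before it can answer requests. Upstream llama.cpp's server exposes a /health endpoint, so a polling helper like the following sketch (stdlib only; adjust the endpoint if your build differs) can be used to block until the server is ready:

```python
import time
import urllib.error
import urllib.request


def wait_until_ready(base_url: str, timeout: float = 60.0) -> bool:
    """Poll base_url + /health until it returns 200, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not accepting connections yet (or still loading).
            pass
        time.sleep(0.5)
    return False
```

Call it after launching the process, e.g. `wait_until_ready("http://127.0.0.1:8080")`, before sending any completions.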

Download files

Download the file for your platform.

Source Distribution

llama_cpp_bin-9101.0.0.tar.gz (4.1 MB)


File details

Details for the file llama_cpp_bin-9101.0.0.tar.gz.

File metadata

  • Download URL: llama_cpp_bin-9101.0.0.tar.gz
  • Size: 4.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llama_cpp_bin-9101.0.0.tar.gz:

  • SHA256: afff24e528d7bfa6b19ad3451236ad993aaeaeda51b6a0922591416609e5dc37
  • MD5: d824ccf7c4bdf5e47f8dca028bee4b52
  • BLAKE2b-256: c99642f09afcd89c9a2bec5d29e922ef81ee28db1f8fef5ce41a8143dd9260ba
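To check a downloaded sdist against the SHA256 digest listed above, you can hash the file locally and compare. A minimal stdlib sketch:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare `sha256_of("llama_cpp_bin-9101.0.0.tar.gz")` against the published digest before installing from a manually downloaded file.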


Provenance

The following attestation bundles were made for llama_cpp_bin-9101.0.0.tar.gz:

Publisher: build-everything.yml on vladlearns/llama-cpp-bin

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
