llama.cpp server binary built from source

Project description

llama-cpp-bin

Pre-built llama.cpp server binaries as a Python package. Install a wheel for your platform and run it.

Install

Pre-built wheels (recommended)

pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cpu llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu124 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu131 llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/rocm llama-cpp-bin
pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/vulkan llama-cpp-bin

Pin to a specific version:

pip install --index-url https://vladlearns.github.io/llama-cpp-bin/whl/cu124 llama-cpp-bin==9095.0.0
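For fully reproducible installs you can also pin the sdist hash in a requirements file using pip's hash-checking mode. A sketch (the SHA256 value is the one listed for the 9097.0.0 sdist in the file details on this page; wheels from the custom indexes would need their own hashes added):

```
# requirements.txt — install with: pip install --require-hashes -r requirements.txt
llama-cpp-bin==9097.0.0 \
    --hash=sha256:df69a4d3668f3b9d7e34e53282068c1cc71577cdb3227cf689361926b7aa2c5a
```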

PyPI (builds from source)

If no pre-built wheel matches your platform, pip falls back to building from the sdist on PyPI:

pip install llama-cpp-bin

You will need CMake, a C++ compiler, and the llama.cpp source submodule.

Dev

git clone --recurse-submodules https://github.com/vladlearns/llama-cpp-bin
cd llama-cpp-bin
CMAKE_ARGS="-DGGML_CUDA=ON" pip install -v .

Run

CLI:

llama-cpp-server -m your-model.gguf --port 8080

Python:

from llama_cpp_bin import run_server
proc = run_server("your-model.gguf", port=8080)
proc.wait()
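run_server returns the server subprocess, but the server needs a moment to load the model before it accepts requests. A minimal sketch for waiting until the port is reachable before sending anything (the wait_for_port helper below is illustrative, not part of the package):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a short TCP connect; success means the server is listening.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.25)
    return False
```

Typical use: call run_server(...), then wait_for_port("127.0.0.1", 8080) before issuing the first request.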

Or get the binary path and run it yourself:

import subprocess
import llama_cpp_bin

binary = llama_cpp_bin.get_binary_path()  # path to the bundled server binary
proc = subprocess.Popen([binary, "--model", "your-model.gguf"])
proc.wait()

Project details


Download files

Download the file for your platform.

Source Distribution

llama_cpp_bin-9097.0.0.tar.gz (4.1 MB)


File details

Details for the file llama_cpp_bin-9097.0.0.tar.gz.

File metadata

  • Download URL: llama_cpp_bin-9097.0.0.tar.gz
  • Upload date:
  • Size: 4.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llama_cpp_bin-9097.0.0.tar.gz
  • SHA256: df69a4d3668f3b9d7e34e53282068c1cc71577cdb3227cf689361926b7aa2c5a
  • MD5: 4468c67e68e7f503007c31ade97dbc69
  • BLAKE2b-256: a2f7f76138304aade500995711e591797c86da7f4a8fc5f7b162df2c387be9cd
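To verify a downloaded sdist against the SHA256 digest above, a short sketch using the standard library (the expected value is copied from this page; pass the path where you saved the archive):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file in 1 MiB chunks and return its hex SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "df69a4d3668f3b9d7e34e53282068c1cc71577cdb3227cf689361926b7aa2c5a"
```

Compare sha256_of_file("llama_cpp_bin-9097.0.0.tar.gz") against EXPECTED; any mismatch means the archive is corrupt or not the published file.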


Provenance

The following attestation bundles were made for llama_cpp_bin-9097.0.0.tar.gz:

Publisher: build-everything.yml on vladlearns/llama-cpp-bin

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
