
llama-buddy

A friendly CLI wrapper for llama.cpp

Manage, download, and serve local LLMs with a single command. Think of it as an ollama-like experience built on top of llama-server.



Features

  • Background server — start/stop/restart llama-server as a daemon
  • Multi-model routing — preset-based configuration with automatic model load/unload
  • Interactive downloads — search HuggingFace, pick a quant, download with progress and resume
  • Rich terminal UI — tables, panels, interactive selectors, and live search
  • GGUF inspector — view model metadata, architecture, and sampling parameters
  • Per-model settings — context size, GPU layers, flash attention, and more
  • Idle model unloading — a background watchdog unloads models after a configurable idle timeout
  • VRAM tracking — automatically parses server logs to show memory usage per model
  • Auto-sync — preset file stays in sync with the llama.cpp cache automatically

Screenshots

  • Model listing (llb models)
  • Interactive download (llb download), including quantization selection
  • Model info (llb info)

Installation

pipx install llama-buddy

Or with uv:

uv tool install llama-buddy

This installs the llb command into an isolated environment and adds it to your PATH.

Prerequisites

  • Python 3.10+
  • llama.cpp installed and llama-server on your PATH

Quick start

# Download a model (interactive search)
llb download

# Or specify directly
llb download mistralai/Ministral-3-3B-Instruct-2512-GGUF:Q4_K_M

# Start the server
llb start

# List all models
llb models

# Chat with a model (uses llama-cli)
llb chat

# Inspect model metadata
llb info

# Configure settings (interactive TUI)
llb settings

# Open the web UI in your browser
llb open

# Stop the server
llb stop
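
The download spec in the quick start combines an HF repo ID and a quantization tag, separated by a colon. The sketch below splits such a spec with plain shell parameter expansion; it is an illustration of the format only, not llama-buddy's actual parsing code.

```shell
# Split an "<hf-repo>:<quant>" download spec (sketch only; llama-buddy's
# own parsing may differ).
spec="mistralai/Ministral-3-3B-Instruct-2512-GGUF:Q4_K_M"
repo="${spec%%:*}"    # everything before the first colon
quant="${spec##*:}"   # everything after the last colon
echo "$repo"          # mistralai/Ministral-3-3B-Instruct-2512-GGUF
echo "$quant"         # Q4_K_M
```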

Commands

Command                Description
llb start              Start llama-server in the background; extra args are forwarded to llama-server.
llb stop               Stop the running server.
llb restart            Restart the server.
llb status             Show whether the server is running.
llb models             List all models with status, size, VRAM usage, and grouping; supports --sort size.
llb download [model]   Download a model; interactive HF search when no model is given.
llb remove [model]     Remove a model with a confirmation dialog; --keep-files preserves the GGUF files.
llb info [model]       Show GGUF metadata; interactive selector when no model is given.
llb settings           Interactive editor for global and per-model settings.
llb chat [model]       Interactive chat via llama-cli; model selector when no model is given.
llb open               Open the llama-server web UI in your browser.
llb logs               Tail the server log file.

Configuration

Config files live in ~/.config/llama/:

File            Purpose
models.ini      Model preset file; sections are HF repo IDs, auto-synced with the llama.cpp cache
settings.json   Global server settings (port, context size, GPU layers, etc.)
vram.json       Cached per-model VRAM usage, parsed from server logs
server.pid      PID of the running server
server.log      Server stdout and stderr
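
Since server.pid records the daemon's PID, a liveness check of the kind llb status performs can be sketched in shell. This assumes nothing beyond the file layout above and is an illustration, not llama-buddy's actual implementation.

```shell
# Sketch: check whether a daemon recorded in a PID file is still alive.
# kill -0 sends no signal; it only tests that the process exists.
pidfile="$HOME/.config/llama/server.pid"
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "server running (pid $(cat "$pidfile"))"
else
    echo "server not running"
fi
```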

Per-model settings

Run llb settings and select Model Settings to configure per-model overrides:

  • Context size, GPU layers, flash attention
  • Custom aliases
  • Any llama-server parameter
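
Putting those pieces together, a per-model section in models.ini might look like the sketch below. The section name is an HF repo ID (as described under Configuration); the key names are hypothetical and stand in for whichever llama-server parameters you choose to override.

```ini
; Hypothetical models.ini excerpt -- key names are illustrative,
; not llama-buddy's documented schema.
[mistralai/Ministral-3-3B-Instruct-2512-GGUF]
alias = ministral
ctx-size = 8192
n-gpu-layers = 99
flash-attn = true
```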

Development

# Clone and install
git clone https://github.com/thilomichael/llama-buddy.git
cd llama-buddy
uv sync

# Run
uv run llb <command>

# Test
uv run pytest

# Lint
uv run ruff check src/ tests/

License

MIT

