
Cross-platform LLM lifecycle manager with resource-pressure and gaming-detection autopause

Project description

llm-valet

Cross-platform drop-in utility that manages Ollama (and other LLM providers) lifecycle based on manual control or automatic resource/activity sensing.

Platforms: macOS · Linux · Windows


What It Does

llm-valet watches your machine in real time. When a game launches, or RAM/CPU/GPU pressure spikes, it automatically unloads the LLM model from memory — then quietly reloads it when resources free up. A REST API and web dashboard give you full manual control at any time.

Origin Use Case

A Mac Mini M4 doubles as both a persistent LLM server and a gaming machine. The valet detects when gaming is happening (or resources are scarce), gracefully unloads the model (and can optionally stop the LLM service), then reloads when resources free up.

Game Detection — Steam Background Helpers

llm-valet detects active gaming by checking for processes whose executable path contains steamapps/common. This catches any game launched via Steam, including helper processes that many games (and Steam itself) keep running as background services.

What this means in practice: If you have Steam open, even without actively playing a game, Steam's helper processes may be detected and hold the watchdog in the paused state. This is by design — Steam helpers compete for the same resources as LLM inference. If you want the valet to stay active while Steam is open in the background, close Steam entirely or use manual /resume to override.
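For illustration, the core of this check can be sketched in a few lines with psutil. This is a simplified sketch of the approach described above, not the project's exact implementation:

import psutil

STEAM_MARKER = "steamapps/common"

def gaming_process_running() -> bool:
    # Scan every process, normalise the executable path, and flag anything
    # installed under Steam's game directory (covers games and helper processes).
    for proc in psutil.process_iter(attrs=["exe"], ad_value=""):
        exe = (proc.info.get("exe") or "").replace("\\", "/").lower()
        if STEAM_MARKER in exe:
            return True
    return False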


Why This Exists

A search of existing tools (April 2026) found no project that combines:

  • Automatic pause/resume based on real-time resource pressure thresholds
  • Gaming activity detection (Steam native process watching)
  • Cross-platform REST API + web dashboard with manual override
  • Provider abstraction (Ollama for v1.0; LM Studio, vLLM, MLX post-v1.0)
| Nearest neighbor | Why it doesn't overlap |
|---|---|
| Open WebUI (130k+ stars) | Chat UI only — no lifecycle control, no resource management |
| EnviroLLM | Energy/resource benchmarking — monitoring only, not automatic control |
| OllamaMan / ollama-dashboard | Read-only dashboards — no pause/resume, no thresholds |
| Ollama built-in keep_alive | Time-based idle unload only — no resource pressure sensing, no gaming detection |

The GitHub issue ollama/ollama#11085 documents community demand for resource-pressure-based unloading that Ollama has not implemented.


Core Concepts

Pause vs. Stop

| Action | Effect | Speed | When |
|---|---|---|---|
| Pause / Resume | Unloads model from memory; service stays running | Fast (seconds) | Default — resource pressure or game detected |
| Stop / Start | Full service shutdown via platform service manager | Slow (30–90s) | Maintenance or zero-memory-footprint |
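The pause path uses Ollama's documented keep_alive parameter (see the /pause entry in the API table below). As a standalone illustration, assuming a local Ollama on its default port, an unload request looks roughly like this:

import httpx

OLLAMA = "http://127.0.0.1:11434"

def unload_all_models() -> None:
    # /api/ps lists models currently resident in memory.
    running = httpx.get(f"{OLLAMA}/api/ps", timeout=10).json().get("models", [])
    for model in running:
        # A generate call with keep_alive=0 asks Ollama to evict the model
        # while the server process itself keeps running.
        httpx.post(f"{OLLAMA}/api/generate",
                   json={"model": model["name"], "keep_alive": 0},
                   timeout=30)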

Supported Providers

Provider Status
Ollama ✅ Implemented
LM Studio post-v1.0
vLLM post-v1.0
MLX (Apple Silicon) post-v1.0

Architecture

llm_valet/
├── api.py              # FastAPI — HTTP endpoints + security middleware
├── watchdog.py         # Auto-mode: process watcher + resource signal consumer
├── config.py           # Settings loader (config.yaml or env vars)
├── providers/          # LLM provider abstraction
│   ├── base.py         #   LLMProvider ABC + ProviderStatus
│   └── ollama.py       #   Ollama implementation
└── resources/          # Machine resource monitoring abstraction
    ├── base.py         #   ResourceCollector ABC + ThresholdEngine (pure logic)
    ├── macos.py        #   Apple Silicon: unified memory pressure + Metal GPU
    ├── linux.py        #   psutil + pynvml / ROCm
    └── windows.py      #   psutil + WMI + pynvml

ThresholdEngine is pure logic — no I/O. Takes SystemMetrics + ResourceThresholds, returns (should_pause: bool, reason: str). Fully unit-testable without mocking any OS APIs.
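A minimal sketch of that idea (the class and field names below are illustrative assumptions, not the project's actual definitions):

from dataclasses import dataclass

@dataclass
class SystemMetrics:
    ram_pct: float
    gpu_vram_pct: float
    game_running: bool

@dataclass
class ResourceThresholds:
    ram_pause_pct: float = 85.0
    ram_resume_pct: float = 60.0
    gpu_vram_pause_pct: float = 85.0

def evaluate(m: SystemMetrics, t: ResourceThresholds, paused: bool) -> tuple[bool, str]:
    # Pure decision function: no I/O, no OS calls, trivially unit-testable.
    if m.game_running:
        return True, "game detected"
    if m.ram_pct >= t.ram_pause_pct:
        return True, f"RAM {m.ram_pct:.0f}% >= pause threshold {t.ram_pause_pct:.0f}%"
    if m.gpu_vram_pct >= t.gpu_vram_pause_pct:
        return True, f"VRAM {m.gpu_vram_pct:.0f}% >= pause threshold {t.gpu_vram_pause_pct:.0f}%"
    if paused and m.ram_pct > t.ram_resume_pct:
        # Hysteresis: stay paused until RAM drops below the resume threshold.
        return True, "waiting for RAM to drop below resume threshold"
    return False, "ok"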


API

| Method | Path | Action |
|---|---|---|
| GET | /status | Provider state + current resource snapshot + watchdog last_reason |
| GET | /watchdog | Watchdog state + last transition reason |
| GET | /metrics | Live SystemMetrics from ResourceCollector |
| POST | /pause | Manual pause — graceful model eviction via keep_alive=0 |
| POST | /pause/force | Force pause — kills inference runner directly; use when /pause is blocked by active inference |
| POST | /resume | Manual resume |
| POST | /load | Load a specific model (unloads current first) |
| GET | /models | List all locally available models |
| DELETE | /models/{name} | Delete a model from local storage |
| POST | /models/pull | Pull (download) a model — blocks until complete |
| POST | /start | Full service start |
| POST | /stop | Graceful service shutdown |
| POST | /stop/force | Force stop — force-pauses then stops the service; returns immediately, poll /status for result |
| POST | /restart | stop → sleep(2) → start |
| GET | /config | Read current thresholds + watchdog settings |
| PUT | /config | Update thresholds at runtime (persisted to config.yaml) |
| GET | /docs | Auto-generated OpenAPI docs |

Tuning RAM Thresholds

The default ram_pause_pct is 85%. On machines where the model itself occupies a large fraction of RAM, this may be too high: total usage can hover just below the threshold while the system is already under memory pressure, so the watchdog never fires. The larger the model, the lower the pause threshold should be.

Rule of thumb by model size on 16 GB RAM:

| Model | RAM estimate | Suggested ram_pause_pct |
|---|---|---|
| 3B (Q4) | ~2 GB (12%) | 70–75% |
| 7B (Q4) | ~5 GB (31%) | 65–70% |
| 13B (Q4) | ~8 GB (50%) | 60–65% |
| 30B (Q4) | ~18 GB (>100%) | N/A — use GPU offloading |

On Apple Silicon (M-series), CPU and GPU share unified memory. Ollama may offload layers to the GPU to fit a model; the CPU/GPU layer split shown in the WebUI (and reported in /status as size_vram_mb) indicates how much of the model sits in each pool. Raise ram_pause_pct only if you have confirmed headroom above the model's footprint.

Hysteresis: ram_resume_pct must be lower than ram_pause_pct. The gap between them is the "dead zone" — values in this band neither trigger a pause nor allow a resume. A gap of 20–25% prevents rapid oscillation. Example: pause at 80%, resume only below 60%.


WebUI Refresh Rate

The dashboard polls /status on a configurable interval (default 5 seconds). To change it:

  • Drag the Refresh rate slider in the Thresholds section (5–60 seconds)
  • The setting is saved to localStorage and persists across sessions
  • Shorter intervals give faster feedback but add more HTTP polling overhead

For machines on battery or with constrained CPUs, 15–30 s is a reasonable setting.


Security

Binding to 0.0.0.0 requires an X-API-Key header. Default bind is 127.0.0.1 (no auth required locally).

Additional mitigations: TrustedHostMiddleware (DNS rebinding), strict CORS (no wildcard), subprocess with shell=False (command injection), textContent-only WebUI (XSS), provider URL validation (SSRF), user-level services only (privilege escalation).
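For readers unfamiliar with how such a header check is typically wired into FastAPI, here is an illustrative dependency-injection pattern — a sketch, not llm-valet's actual middleware:

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False)
EXPECTED_KEY = "change-me"  # in llm-valet this would come from config.yaml

def require_key(key: str | None = Depends(api_key_header)) -> None:
    # Reject the request unless the X-API-Key header matches the configured key.
    if key != EXPECTED_KEY:
        raise HTTPException(status_code=401, detail="invalid or missing API key")

@app.post("/pause", dependencies=[Depends(require_key)])
def pause() -> dict[str, str]:
    return {"state": "paused"}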


Prerequisites

llm-valet requires Python 3.11 or newer and Ollama. Both must be present before running the installer.

Python 3.11+

Python 3.11 or newer. If you don't have Python installed, download the latest release from python.org/downloads — the website always shows the current recommended version at the top.

macOS
brew install python

Note: if you installed Ollama via Homebrew, a recent Python may already be present as part of its dependency chain; verify with python3 --version. If you installed Ollama via direct download, run the command above.

Windows

Download and install Python from python.org or the Microsoft Store. Python is not pre-installed on Windows.

During installation, check "Add Python to PATH".

Linux

Python 3 ships pre-installed on most distributions. Verify your version:

python3 --version   # must be 3.11 or newer

If below 3.11:

# Ubuntu/Debian (recent releases ship 3.11+; older releases need a
# distribution upgrade or a newer interpreter from another source)
sudo apt install python3

Ollama

llm-valet manages Ollama — it must be installed and running before you install llm-valet.

Supported Ollama install methods (v1.0):

| Method | Platform | Lifecycle control | Notes |
|---|---|---|---|
| brew services start ollama | macOS | ✅ Full | Recommended. launchd manages restart on crash. |
| Ollama.app (.dmg) | macOS | ⚠️ Untested | Code present; not validated in v1.0. |
| ollama serve (manual) | All | ❌ None | Watchdog monitors, but /start and /stop won't work. |
| systemd (--user) | Linux | ✅ Full | |
| Windows Service | Windows | ✅ Full | |

macOS — use brew services, not ollama serve. llm-valet's /start and /stop commands use launchctl bootstrap/bootout to manage the Homebrew service. Starting Ollama with ollama serve instead bypasses launchd, so those commands will have no effect. The watchdog will still monitor and pause/resume the model, but full lifecycle control requires the service to be launchd-managed.

LAN server mode (OLLAMA_HOST=0.0.0.0). If you expose Ollama on your LAN (e.g., for remote coding or multi-machine setups), Ollama's own port (default 11434) has no built-in authentication. llm-valet secures its own API on port 8765 but does not proxy or restrict access to Ollama's port. Restrict port 11434 at the firewall or VLAN level.

Multi-machine deployments. Running one llm-valet instance per machine — each managing its own local Ollama — is a supported pattern. Each instance is independent; there is no cross-machine coordination in v1.0.

macOS

Homebrew (recommended)
brew install ollama
brew services start ollama
Direct download (Ollama.app)

Visit ollama.com and download the macOS installer. Run the .dmg, drag Ollama to Applications, and launch it from there. Ollama runs as a menu bar app and starts its local server automatically.

Note: Ollama.app lifecycle control (/start, /stop) is present in the code but not validated in v1.0. Pause/resume and watchdog monitoring work regardless of install method.

Windows

Visit ollama.com and download the Windows installer.


Verify both prerequisites are met before continuing:

python3 --version   # expect 3.11 or newer
ollama list         # should return a model table (possibly empty), not an error

Install

curl -fsSL https://raw.githubusercontent.com/LegionForge/llm-valet/main/install/install.sh | bash

The installer:

  • Creates an isolated Python environment at ~/.llm-valet/
  • Writes a default config to ~/.llm-valet/config.yaml
  • Registers a user-level auto-start service (launchd on macOS, systemd on Linux)

Once installed, the WebUI is at http://localhost:8765 and the service starts automatically at login.

To uninstall:

curl -fsSL https://raw.githubusercontent.com/LegionForge/llm-valet/main/install/uninstall.sh | bash

Pass --purge to also remove your config and logs.


First-Run Setup

When you open the WebUI (http://localhost:8765) for the first time, a setup modal walks you through three steps:

Step 1 — Save your API key

A random API key is generated and displayed once. Copy it now — it will not be shown again (though it is always retrievable from ~/.llm-valet/config.yaml). The key is required for LAN access; local access from the same machine never requires it.

Step 2 — Network access

Choose who can connect:

| Option | Binds to | Auth required |
|---|---|---|
| This machine only | 127.0.0.1 | No |
| Local network | 0.0.0.0 | Yes — X-API-Key header |
| Custom IP | Your chosen address | Yes, if not 127.0.0.1 |

The port defaults to 8765. Click Next.

Step 3 — Confirm and apply

Review your settings and click Save & Restart. llm-valet restarts and applies the new bind address and port. The API key is shown one final time before the modal closes.

After setup, the WebUI becomes the primary control surface — status, resource bars, manual pause/resume, and threshold sliders are all there.


Quick Start

# Check status
curl http://localhost:8765/status

# Manual control
curl -X POST http://localhost:8765/pause
curl -X POST http://localhost:8765/resume

# LAN access (X-API-Key required — set api_key in config first)
curl -H "X-API-Key: your-key" -X POST http://mac-mini.local:8765/pause

Config lives at ~/.llm-valet/config.yaml.


Configuration Reference

All settings live in ~/.llm-valet/config.yaml. The file is created with safe defaults on first install. Permissions are enforced to 0600 on every write.

Full config.yaml with defaults

# Network
host: 127.0.0.1                       # Bind address. 127.0.0.1 = localhost only; 0.0.0.0 = LAN
port: 8765                             # Listen port (1024–65535)

# Provider
provider: ollama                       # LLM provider. Only "ollama" in v1.0.
ollama_url: http://127.0.0.1:11434     # Ollama API base URL (must be localhost or RFC1918)
model_name: null                       # Preferred model. null = act on whatever is loaded

# Auth
api_key: ""                            # Required when host is 0.0.0.0. Set via WebUI or manually.
key_acknowledged: false                # Set to true after the first-run setup modal completes

# CORS / trusted hosts
cors_origins: []                       # Allowed CORS origins, e.g. ["http://mac-mini.local:8765"]
extra_allowed_hosts: []                # Extra Host header values for TrustedHostMiddleware

# Logging
log_file: ~/.llm-valet/valet.log       # Rotating JSON log (5 MB × 3 backups)

# Watchdog thresholds
thresholds:
  ram_pause_pct: 85.0           # Pause when RAM usage exceeds this %
  ram_resume_pct: 60.0          # Resume only when RAM drops below this % (hysteresis)
  cpu_pause_pct: 90.0           # Pause when CPU usage exceeds this %
  cpu_sustained_seconds: 30     # CPU must exceed threshold for this many seconds before pausing
  gpu_vram_pause_pct: 85.0      # Pause when GPU VRAM exceeds this %
  pause_timeout_seconds: 120    # Grace period (seconds) before auto-resume after pressure clears
  check_interval_seconds: 10    # Watchdog poll interval (seconds, minimum 1)
  auto_resume_on_ram_pressure: true  # false = RAM-triggered pauses require manual /resume

All threshold percentages must be in the range 0–100. ram_resume_pct must be lower than ram_pause_pct — the gap between them is the hysteresis dead zone that prevents oscillation.

Thresholds can also be updated at runtime without restarting via PUT /config or the WebUI sliders — changes are persisted immediately.
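For example, a runtime update from a script might look like this (the payload shape is assumed to mirror the thresholds block above; check GET /config for the exact schema):

import httpx

resp = httpx.put(
    "http://localhost:8765/config",
    json={"thresholds": {"ram_pause_pct": 80.0, "ram_resume_pct": 60.0}},
    headers={"X-API-Key": "your-key"},  # only required for non-localhost access
)
print(resp.status_code, resp.json())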

Environment variable overrides

Four settings can be overridden at launch via environment variables. Env vars take precedence over config.yaml.

| Variable | Overrides | Example |
|---|---|---|
| LLM_VALET_HOST | host | LLM_VALET_HOST=0.0.0.0 |
| LLM_VALET_PORT | port | LLM_VALET_PORT=9000 |
| LLM_VALET_API_KEY | api_key | LLM_VALET_API_KEY=mysecret |
| LLM_VALET_PROVIDER | provider | LLM_VALET_PROVIDER=ollama |

Development

git clone https://github.com/LegionForge/llm-valet
cd llm-valet
pip install -e ".[dev]"

# Run with hot-reload
uvicorn llm_valet.api:app --host 127.0.0.1 --port 8765 --reload

Requirements: Python 3.11+ · fastapi · uvicorn · httpx · psutil · pyyaml
Optional: pynvml for NVIDIA GPU metrics on Linux/Windows


Static Analysis

Seven tools cover linting, security SAST, type safety, dependency CVEs, broader SAST, test coverage, and commit-time enforcement. Three of them (Ruff, Bandit, mypy) also run before every commit via the pre-commit hook. All are in [dev] dependencies and configured in pyproject.toml.

| Tool | Purpose | Runs |
|---|---|---|
| Ruff | Lint + import sort | pre-commit, CI |
| Bandit | Security SAST (Python patterns) | pre-commit, CI |
| mypy | Type checking (strict mode) | pre-commit, CI |
| pip-audit | Dependency CVE scan | CI |
| semgrep | Broader SAST (FastAPI + OWASP rulesets) | CI |
| pytest-cov | Test coverage (≥80% enforced) | CI |
| pre-commit | Runs ruff + bandit + mypy on every git commit | local |

Installation and setup

Create a project venv — not your system Python or Anaconda. pip-audit scans installed packages; running it against Anaconda floods results with unrelated packages.

# From repo root
python -m venv .venv

# Activate — PowerShell
.venv\Scripts\Activate.ps1

# Activate — macOS / Linux / Git Bash
source .venv/bin/activate

# Install project + all dev tools
pip install -e ".[dev]" types-PyYAML

Validate the install:

python -m ruff --version        # expect: ruff 0.4.x or later
python -m bandit --version      # expect: bandit 1.7.x or later
python -m mypy --version        # expect: mypy 1.10.x or later
python -m pip_audit --version   # expect: pip-audit 2.7.x or later
python -m semgrep --version     # expect: semgrep 1.70.x or later
python -m pytest --version      # expect: pytest 8.x with cov plugin
pre-commit --version            # expect: pre-commit 3.7.x or later

If any command returns "not found", the venv is not active or the install failed. Re-run pip install -e ".[dev]" with the venv active.

Install the git hook (one-time per clone):

pre-commit install

After this, ruff + bandit + mypy run automatically on every git commit. A failed hook blocks the commit — fix the issue, re-stage, and commit again. To skip in an emergency: git commit --no-verify (use sparingly, log why).

Validate the hook is installed:

pre-commit run --all-files

All hooks should pass on a clean checkout.


Running the tools

All commands run from the repo root with the venv active.

Ruff — linting and import sorting

python -m ruff check llm_valet svcmgr

Auto-fix safe issues (formatting, import order):

python -m ruff check llm_valet svcmgr --fix

Reading the output:

llm_valet/api.py:45:5: S105 Possible hardcoded password assigned to: "api_key"
llm_valet/watchdog.py:12:1: F401 `os` imported but unused
Found 2 errors.

Format: file:line:col: CODE description

| Code prefix | Category | Act on it? |
|---|---|---|
| E, W | Style / formatting | Yes — auto-fixable |
| F | Pyflakes (unused imports, undefined names) | Yes — real bugs |
| I | Import order | Yes — auto-fixable |
| S | Security (bandit-style) | Yes — read carefully |
| B | Bugbear (common bugs) | Yes |
| UP | Modernisation opportunities | Yes — auto-fixable |
| RUF | Ruff-specific checks | Yes |

This project suppresses S603 (subprocess, shell=False reviewed) and S607 (partial executable path for system binaries). Those skips are intentional — do not remove them.

Clean output:

All checks passed!

Bandit — security SAST

python -m bandit -r llm_valet svcmgr -c pyproject.toml

Reading the output:

>> Issue: [B324:hashlib] Use of weak MD5 hash for security.
   Severity: Medium   Confidence: High
   Location: llm_valet/config.py:45
   More Info: https://bandit.readthedocs.io/en/latest/plugins/b324_hashlib.html

Triage by the intersection of Severity and Confidence:

| | High Confidence | Medium Confidence | Low Confidence |
|---|---|---|---|
| High Severity | Fix immediately | Investigate | Review |
| Medium Severity | Investigate | Review | Low priority |
| Low Severity | Review | Low priority | Probably noise |

To see all findings including suppressed codes:

python -m bandit -r llm_valet svcmgr -c pyproject.toml --skip ""

This project suppresses B404, B603, B607 — all subprocess-related, reviewed and confirmed safe because shell=False is enforced throughout.
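For context, this is the pattern those suppressions refer to: arguments are passed as a list with shell=False, so nothing is ever interpreted by a shell. An illustration, not the project's actual code:

import subprocess

def brew_service(action: str) -> int:
    # Allow-list the verb so no free-form input reaches the command line.
    if action not in {"start", "stop", "restart"}:
        raise ValueError(f"unsupported action: {action}")
    result = subprocess.run(
        ["brew", "services", action, "ollama"],  # argv list, no string interpolation
        shell=False, check=False, capture_output=True, text=True,
    )
    return result.returncode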

Clean output:

Test results:
        No issues identified.

mypy — type checking

python -m mypy llm_valet svcmgr

Reading the output:

llm_valet/providers/ollama.py:145: error: Item "None" of "str | None" has no attribute "lower"  [union-attr]
llm_valet/config.py:68: error: Argument 1 to "setattr" has incompatible type  [arg-type]
Found 2 errors in 2 files (checked 8 source files)

Format: file:line: error: description [error-code]

Error codes relevant to correctness and security:

| Code | What it means | Security relevance |
|---|---|---|
| [union-attr] | Used a value that could be None without a None-check | Potential crash / bypass |
| [arg-type] | Wrong type passed to a function | Logic error, silent failures |
| [return-value] | Function returns the wrong type | Silent data corruption |
| [attr-defined] | Attribute doesn't exist on the type | Likely a typo or wrong object |
| [no-untyped-def] | Function missing type annotations | Reduces audit coverage |

This project runs strict mode — all functions must be annotated, all Optional accesses checked. A clean run means the type system has verified the full call graph.
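As a tiny illustration of the most common code above, [union-attr]: guard Optional values before calling methods on them (not project code).

def normalised_model_name(name: str | None) -> str:
    # return name.lower()   # strict mypy: Item "None" of "str | None" has no attribute "lower" [union-attr]
    if name is None:
        return ""
    return name.lower()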

# type: ignore[attr-defined] comments in svcmgr/macos.py are intentional — os.getuid() does not exist on Windows, where mypy runs in CI; the comment documents this rather than suppressing a real error.

Clean output:

Success: no issues found in N source files

pip-audit — dependency CVEs

python -m pip_audit

Reading the output:

Name          Version  ID                   Fix Versions
------------- -------- -------------------- ------------
cryptography  41.0.0   GHSA-jfh8-c2jp-x4fc  41.0.6

For each finding:

  1. Read the advisory (the ID is a link when run with --format=columns)
  2. Check whether the vulnerable code path is reachable from llm-valet's usage
  3. Upgrade if a fix version exists: pip install "cryptography>=41.0.6"
  4. If no fix version exists, check the advisory for mitigations

Must run inside the project venv, not a global Anaconda env. Anaconda installs many packages unrelated to this project and will produce many false-positive CVEs.

Clean output:

No known vulnerabilities found


Semgrep — broader SAST

python -m semgrep --config=p/python --config=p/fastapi llm_valet/

What it checks: OWASP Top 10 patterns, FastAPI-specific issues (unprotected routes, response model leaks), async pitfalls, and hundreds of Python security patterns that Bandit doesn't cover.

Reading the output:

llm_valet/api.py
  fastapi.security.missing-auth: Route /admin has no authentication dependency
  │ @app.get("/admin")
  ╰─────────────────── llm_valet/api.py:55

Found 1 finding in 1 file.

Format: ruleset.rule-id: description followed by the offending code and location.

  • p/python rules cover general Python security patterns
  • p/fastapi rules cover framework-specific issues

Each finding links to the rule documentation explaining the attack vector. Read it before deciding whether to fix or suppress.

To suppress a specific rule on a specific line:

result = do_thing()  # nosemgrep: rule-id

Clean output:

Ran N rules on M files: 0 findings.

pytest with coverage

python -m pytest

Coverage is automatically enabled via pyproject.toml (--cov=llm_valet --cov-fail-under=80). The run fails if coverage drops below 80%.

Reading the output:

----------- coverage: platform linux, python 3.11 -----------
Name                              Stmts   Miss  Cover   Missing
---------------------------------------------------------------
llm_valet/api.py                    89      12    87%   45-52, 110
llm_valet/config.py                 48       3    94%   102-104
llm_valet/providers/ollama.py       97      18    81%   200-217
---------------------------------------------------------------
TOTAL                              234      33    86%

Columns:

  • Stmts — total executable lines
  • Miss — lines not executed by any test
  • Cover — percentage covered
  • Missing — line numbers with no test coverage

Lines in Missing are risk areas — untested code paths. For security-sensitive functions (auth, subprocess calls, config validation), these deserve tests before merging.
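A sketch of the kind of test that closes such a gap, exercising the pure threshold logic. The constructor and method names here are assumed for illustration; check llm_valet/resources/base.py for the real signatures before copying:

from llm_valet.resources.base import ThresholdEngine  # real module; the API below is assumed

def test_high_ram_triggers_pause():
    engine = ThresholdEngine(ram_pause_pct=80.0, ram_resume_pct=60.0)
    should_pause, reason = engine.evaluate(ram_pct=91.0, cpu_pct=10.0, gpu_vram_pct=0.0)
    assert should_pause
    assert "RAM" in reason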

Run tests without failing on coverage threshold (for investigation):

python -m pytest --no-cov-on-fail --cov-fail-under=0

Run only unit tests:

python -m pytest tests/unit/

Run everything at once

With the venv active:

python -m ruff check llm_valet svcmgr && \
python -m bandit -r llm_valet svcmgr -c pyproject.toml && \
python -m mypy llm_valet svcmgr && \
python -m semgrep --config=p/python --config=p/fastapi llm_valet/ && \
python -m pytest && \
python -m pip_audit

Or via hatch (manages its own env, no manual venv activation):

hatch run lint    # ruff + bandit + mypy + semgrep
hatch run test    # pytest with coverage
hatch run audit   # pip-audit

CI (.github/workflows/ci.yml) runs all tools on every push to main and dev:

| Job | Tools | Blocks merge? |
|---|---|---|
| Lint & Type Check | ruff, bandit, mypy | Yes |
| Tests & Coverage | pytest-cov (≥80%) | Yes |
| Semgrep SAST | p/python + p/fastapi | Yes |
| Dependency Audit | pip-audit | Yes |
| CodeQL | security-extended queries | Yes |

Validating AI-generated security findings

When an AI tool (or another person) reports a vulnerability, apply this checklist before acting:

1. Verify the file and line exist

Open the cited file and go to the cited line. If the code isn't there, the finding is hallucinated.

2. Check if a tool flags it

Run Bandit and Ruff. If neither flags it, and it isn't a logic/type issue mypy would catch, the AI likely misidentified the risk. Real vulnerabilities in Python almost always have a corresponding Bandit rule.

3. Understand the architecture before accepting a fix

Read the Security section above and the threat model in CLAUDE.md. A finding that contradicts a documented design decision (e.g., "CORS is disabled" — it is, intentionally) means the AI doesn't understand the system.

4. Common false-positive patterns to reject

| AI claim | Why to reject |
|---|---|
| "No authentication enforcement" when auth exists | AI didn't recognise the framework's dependency injection pattern |
| "Hardcode a default secret" as a fix | Creates a shared-secret vulnerability far worse than the original |
| "Encrypt config with a hardcoded key" | Security theater — a fixed key provides no protection |
| "CORS disabled = vulnerability" | Empty CORS origins = same-origin only = secure default |
| ":" in model name regex = path traversal | ":" is Ollama's tag separator; model names go in JSON bodies, not file paths |
| Timeout values flagged as OWASP issues | Operational parameters, not security vulnerabilities |

5. Trust tool output over AI narrative

If Ruff, Bandit, and mypy are all clean, and the AI claims there is a critical vulnerability, ask the AI to cite the specific Bandit or CWE rule that applies. If it can't, the finding is likely wrong.


Built With

llm-valet stands on the shoulders of these open source projects:

| Dependency | Role | License |
|---|---|---|
| FastAPI | HTTP API framework — routing, middleware, OpenAPI docs generation | MIT |
| Starlette | ASGI foundation beneath FastAPI — request/response, middleware layer | BSD |
| Pydantic | Data validation — enforces types on config and API payloads | MIT |
| uvicorn | ASGI server — runs the FastAPI app, handles HTTP connections | BSD |
| httpx | Async HTTP client — all communication with the Ollama provider API | BSD |
| psutil | Cross-platform process and system metrics — RAM, CPU, process enumeration | BSD |
| PyYAML | Config file parsing (~/.llm-valet/config.yaml) | MIT |
| pynvml | NVIDIA GPU metrics on Linux/Windows (optional) | BSD |

Development toolchain:

Tool Role
Ruff Linting and import sorting
Bandit Python security SAST
mypy Static type checking (strict mode)
semgrep Broader SAST — OWASP + FastAPI rulesets
pip-audit Dependency CVE scanning
pytest Test runner + coverage enforcement
pre-commit Git hook runner — enforces lint/type checks before every commit

License

MIT License — Copyright (c) 2026 LegionForge · jp@legionforge.org

Attribution required: all copies and distributions must include the above copyright notice per the MIT license terms.

Download files

Download the file for your platform.

Source Distribution

legionforge_llm_valet-1.0.0.tar.gz (205.1 kB)

Built Distribution

legionforge_llm_valet-1.0.0-py3-none-any.whl (71.0 kB)

File details

Details for the file legionforge_llm_valet-1.0.0.tar.gz.

File metadata

  • Download URL: legionforge_llm_valet-1.0.0.tar.gz
  • Size: 205.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for legionforge_llm_valet-1.0.0.tar.gz
Algorithm Hash digest
SHA256 f042a3ff69f39ba7c8bd493f6238ba01504b968026b4f7b4288b421a6719e736
MD5 5b8b06aeb9cc2297430e5ce33b2c79db
BLAKE2b-256 93ce8833bbbb45372841f9c1803c56b2a9a7ba27bad628f18b58b3891bcfed89


Provenance

The following attestation bundles were made for legionforge_llm_valet-1.0.0.tar.gz:

Publisher: publish.yml on LegionForge/llm-valet


File details

Details for the file legionforge_llm_valet-1.0.0-py3-none-any.whl.


File hashes

Hashes for legionforge_llm_valet-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 e529115a30aef6c2090ea8c1a67485b09075589b065e8455c7d5222436f0601b
MD5 d65c65d47507923d0bc6a31662c61cd7
BLAKE2b-256 b8b5b4c901f3065c6e7e687a48758b8663c845dcfcd3f6a44301cc49239748bb


Provenance

The following attestation bundles were made for legionforge_llm_valet-1.0.0-py3-none-any.whl:

Publisher: publish.yml on LegionForge/llm-valet

