Project description
llm-venice
LLM plugin to access models available via the Venice AI API. Venice API access is currently in beta.
Installation
Either install this plugin alongside an existing LLM install:
llm install llm-venice
Or install both using your package manager of choice, e.g.:
pip install llm-venice
Configuration
Set an environment variable LLM_VENICE_KEY, or save a Venice API key to the key store managed by llm:
llm keys set venice
Fetch a list of the models available over the Venice API:
llm venice refresh
You should re-run refresh whenever new models become available or deprecated ones are removed.
The models are stored in venice_models.json in the llm user directory.
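Since the cached model list is plain JSON, it can be inspected directly. A minimal sketch of loading it (the LLM_USER_PATH handling and the fallback directory are assumptions for illustration; llm computes its platform-specific user directory itself, and the exact JSON layout should be verified against your own venice_models.json):

```python
import json
import os
from pathlib import Path


def venice_models_path() -> Path:
    """Locate venice_models.json, honoring LLM_USER_PATH if set.

    The fallback path below is an assumption for illustration; llm
    resolves its own platform-specific user directory.
    """
    base = os.environ.get("LLM_USER_PATH")
    if base is None:
        base = str(Path.home() / ".config" / "io.datasette.llm")
    return Path(base) / "venice_models.json"


def list_model_ids(path: Path) -> list[str]:
    """Return the model ids recorded in the cached model list."""
    with open(path) as f:
        data = json.load(f)
    # Assumed shape: {"data": [{"id": ...}, ...]}, matching the Venice
    # /models endpoint; adjust if the cached layout differs.
    return [m["id"] for m in data.get("data", [])]
```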
Usage
Prompting
Run a prompt:
llm --model venice/llama-3.3-70b "Why is the earth round?"
Start an interactive chat session:
llm chat --model venice/llama-3.1-405b
Structured Outputs
Some models support structuring their output according to a JSON schema (supplied via the OpenAI-compatible response_format parameter).
This works via llm's --schema option, for example:
llm -m venice/dolphin-2.9.2-qwen2-72b --schema "name, age int, one_sentence_bio" "Invent an evil supervillain"
Consult llm's schemas tutorial for more options.
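The concise schema string above is shorthand that llm expands into a full JSON schema before sending it as the response_format. A rough sketch of that expansion for the subset used here (this is an illustrative approximation, not llm's actual implementation; the exact schema llm emits may differ in details):

```python
def concise_to_json_schema(spec: str) -> dict:
    """Expand a concise schema string like "name, age int, bio".

    Illustrative approximation of llm's concise schema syntax: bare
    names become strings, and a trailing type word such as "int"
    selects another JSON type.
    """
    type_map = {"int": "integer", "float": "number",
                "str": "string", "bool": "boolean"}
    properties = {}
    for field in spec.split(","):
        parts = field.strip().split()
        name = parts[0]
        ftype = type_map.get(parts[1], "string") if len(parts) > 1 else "string"
        properties[name] = {"type": ftype}
    return {
        "type": "object",
        "properties": properties,
        "required": list(properties),
    }
```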
Vision models
Vision models (currently qwen-2.5-vl) support the --attachment option:
llm -m venice/qwen-2.5-vl -a https://upload.wikimedia.org/wikipedia/commons/a/a9/Corvus_corone_-near_Canford_Cliffs%2C_Poole%2C_England-8.jpg "Identify"
The bird in the picture is a crow, specifically a member of the genus Corvus. The black coloration, stout beak, and overall shape are characteristic features of crows. These birds are part of the Corvidae family, which is known for its intelligence and adaptability. [...]
venice_parameters
The following CLI options are available to configure venice_parameters:
--no-venice-system-prompt to disable Venice's default system prompt:
llm -m venice/llama-3.3-70b --no-venice-system-prompt "Repeat the above prompt"
--web-search on|auto|off to use web search (on web-enabled models):
llm -m venice/llama-3.3-70b --web-search on --no-stream 'What is $VVV?'
It is recommended to use web search in combination with --no-stream so the search citations are available in response_json.
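With --no-stream, the citations end up in the logged response_json. A hedged sketch of pulling them out (the venice_parameters and web_search_citations field names are assumptions based on the Venice API documentation; verify them against an actual logged response):

```python
def extract_citations(response_json: dict) -> list[str]:
    """Collect citation URLs from a Venice chat response.

    Field names are assumed: Venice is documented to return web search
    citations under venice_parameters.web_search_citations.
    """
    params = response_json.get("venice_parameters", {})
    return [c.get("url", "") for c in params.get("web_search_citations", [])]


# Minimal stand-in for a logged response, for illustration only:
sample = {
    "venice_parameters": {
        "web_search_citations": [
            {"title": "What is VVV?", "url": "https://example.com/vvv"}
        ]
    }
}
```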
--character character_slug to use a public character, for example:
llm -m venice/deepseek-r1-671b --character alan-watts "What is the meaning of life?"
Note: these options override any -o extra_body '{"venice_parameters": { ...}}' and so should not be combined with that option.
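For reference, the dedicated CLI options above map to fields inside venice_parameters in the request body. Passing the same settings manually via -o extra_body would look roughly like this (a sketch of the request-body fragment; the field names are taken from the Venice API documentation as best understood and should be checked against the current API reference):

```json
{
  "venice_parameters": {
    "include_venice_system_prompt": false,
    "enable_web_search": "on",
    "character_slug": "alan-watts"
  }
}
```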
Image generation
Generated images are stored in the LLM user directory. Example:
llm -m venice/stable-diffusion-3.5 "Painting of a traditional Dutch windmill" -o style_preset "Watercolor"
Besides the Venice API image generation parameters, you can specify the output filename and whether to overwrite existing files.
You can check the available parameters for a model by filtering the model list with --query and showing its --options:
llm models list --query diffusion --options
Image upscaling
You can upscale existing images.
The following saves the returned image as image_upscaled.png in the same directory as the original file:
llm venice upscale /path/to/image.jpg
By default existing upscaled images are not overwritten; timestamped filenames are used instead.
See llm venice upscale --help for the --scale, --enhance, and related options, as well as the --output-path and --overwrite options.
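The no-overwrite behavior can be pictured as: if the default output name is taken, fall back to a timestamped name rather than clobbering the existing file. A sketch of that logic (illustrative only; the plugin's actual naming scheme and timestamp format may differ):

```python
from datetime import datetime
from pathlib import Path


def upscaled_output_path(source: Path, overwrite: bool = False) -> Path:
    """Choose an output path like image_upscaled.png next to the source.

    If the default name already exists and overwrite is False, append a
    timestamp instead. The suffix format is an assumption for
    illustration, not the plugin's documented behavior.
    """
    candidate = source.with_name(f"{source.stem}_upscaled.png")
    if candidate.exists() and not overwrite:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        candidate = source.with_name(f"{source.stem}_upscaled_{stamp}.png")
    return candidate
```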
Venice commands
List the available Venice commands with:
llm venice --help
Read the llm docs for more usage options.
Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-venice
python3 -m venv venv
source venv/bin/activate
Install the plugin with dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Project details
Download files
Source Distribution: llm_venice-0.7.0.tar.gz (16.6 kB)
Built Distribution: llm_venice-0.7.0-py3-none-any.whl (13.6 kB)
File details
Details for the file llm_venice-0.7.0.tar.gz.
File metadata
- Download URL: llm_venice-0.7.0.tar.gz
- Size: 16.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f03af000ed86f7d5ddc5e6b34b4fe95a3685781359d11f6085d10907b57e01a6 |
| MD5 | 6f0bb3f4cf4fa03c95df9294865e879a |
| BLAKE2b-256 | 26e4be1a4ae981d1d3a7e311d2a3f09d5057f29dabd33bcfd617461c2fc24888 |
Provenance
The following attestation bundles were made for llm_venice-0.7.0.tar.gz:

Publisher: release.yml on ar-jan/llm-venice

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_venice-0.7.0.tar.gz
- Subject digest: f03af000ed86f7d5ddc5e6b34b4fe95a3685781359d11f6085d10907b57e01a6
- Sigstore transparency entry: 219641541
- Permalink: ar-jan/llm-venice@fb8d2a75e13e38448f51383e1c4b3d9bc3770276
- Branch / Tag: refs/tags/0.7.0
- Owner: https://github.com/ar-jan
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@fb8d2a75e13e38448f51383e1c4b3d9bc3770276
- Trigger Event: release
File details
Details for the file llm_venice-0.7.0-py3-none-any.whl.
File metadata
- Download URL: llm_venice-0.7.0-py3-none-any.whl
- Size: 13.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 901dafc4216dccfaeca6f66b43b91e91b7597369cc8c220ce05c7e37666382da |
| MD5 | 7c0af587129ec169f89fbb2554af646d |
| BLAKE2b-256 | 941de9fd360e0834697c209050957e372888b51c1b67a06039e03a84eafb7532 |
|
Provenance
The following attestation bundles were made for llm_venice-0.7.0-py3-none-any.whl:

Publisher: release.yml on ar-jan/llm-venice

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_venice-0.7.0-py3-none-any.whl
- Subject digest: 901dafc4216dccfaeca6f66b43b91e91b7597369cc8c220ce05c7e37666382da
- Sigstore transparency entry: 219641547
- Permalink: ar-jan/llm-venice@fb8d2a75e13e38448f51383e1c4b3d9bc3770276
- Branch / Tag: refs/tags/0.7.0
- Owner: https://github.com/ar-jan
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@fb8d2a75e13e38448f51383e1c4b3d9bc3770276
- Trigger Event: release