A community-driven Python client library for interacting with the Venice.ai API, offering comprehensive access to its features.
Venice AI Python Client
Developed to benchmark and explore the full capabilities of the Venice.ai API, the venice-ai Python package has evolved into a comprehensive client library for developers. This library provides convenient access to Venice.ai's powerful features, including chat completions, image generation, audio synthesis, embeddings, and more, with support for both synchronous and asynchronous operations.
Features
- Intuitive Pythonic interface for all Venice.ai API endpoints.
- Support for both synchronous and asynchronous operations.
- Comprehensive model support for Chat, Image Generation, Embeddings, etc.
- Streaming capabilities for chat completions and audio.
- Built-in utilities for tasks like token estimation.
- Robust error handling and full type hints for a better developer experience.
- Detailed API documentation generated with Sphinx.
Getting Started
Prerequisites
Python 3.11 or higher.
Installation
You can install the Venice AI client library from PyPI:
pip install venice-ai
Alternatively, to install the latest development version from source:
git clone https://github.com/venice-ai/venice-ai-python.git
cd venice-ai-python
poetry install
API Key Setup
To use the Venice AI API, you need an API key.
The client library expects the API key to be available as an environment variable:
export VENICE_API_KEY="your_api_key_here"
Alternatively, you can pass the API key directly when initializing the client, though using environment variables is recommended for security.
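A small sanity check before constructing a client can fail fast with a clear message when the variable is missing. This is a minimal, hypothetical helper (not part of the venice-ai library):

```python
import os

def get_venice_api_key() -> str:
    """Return the API key from the environment, failing fast if it is absent."""
    key = os.environ.get("VENICE_API_KEY")
    if not key:
        raise RuntimeError(
            "VENICE_API_KEY is not set; export it or pass api_key= to the client."
        )
    return key
```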
Usage
Client Initialization
Synchronous Client:
from venice_ai import VeniceClient
client = VeniceClient()
# If API key is not set as an environment variable:
# client = VeniceClient(api_key="your_api_key_here")
Asynchronous Client:
import asyncio
from venice_ai import AsyncVeniceClient
async def main():
    async_client = AsyncVeniceClient()
    # If API key is not set as an environment variable:
    # async_client = AsyncVeniceClient(api_key="your_api_key_here")

    # Example: Asynchronous Chat Completion
    try:
        print("Attempting asynchronous chat completion...")
        response = await async_client.chat.completions.create(
            model="llama-3.2-3b",  # Or your preferred model
            messages=[{"role": "user", "content": "Hello asynchronously from Venice AI!"}]
        )
        if response.choices:
            print("Async Chat Response:", response.choices[0].message.content)
        else:
            print("No response choices received.")
    except Exception as e:  # Catching a general exception for API or client errors
        print(f"An unexpected error occurred during async chat: {e}")
    finally:
        print("Closing async client...")
        await async_client.close()
        print("Async client closed.")

if __name__ == "__main__":
    asyncio.run(main())
It's important to await async_client.close() when you're finished using the asynchronous client. This ensures that underlying HTTP resources and connections are properly released, preventing potential resource leaks in your application.
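To make that cleanup harder to forget, the try/finally pattern can be wrapped with contextlib.asynccontextmanager. The sketch below uses a hypothetical stand-in class with an awaitable close() so it runs on its own; the same wrapper would apply to any client object exposing such a method:

```python
import asyncio
from contextlib import asynccontextmanager

class _DemoClient:
    """Hypothetical stand-in for any client exposing an awaitable close()."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

@asynccontextmanager
async def managed_client(factory):
    """Yield a client and guarantee its close() runs, even if the body raises."""
    client = factory()
    try:
        yield client
    finally:
        await client.close()

async def demo():
    async with managed_client(_DemoClient) as client:
        pass  # use the client here
    return client.closed

print(asyncio.run(demo()))  # -> True
```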
Chat Completions
Non-streaming example:
from venice_ai import VeniceClient
# Ensure VENICE_API_KEY is set in your environment,
# or initialize the client with client = VeniceClient(api_key="your_api_key_here")
client = VeniceClient()
response = client.chat.completions.create(
    model="llama-3.2-3b",  # Or your preferred model
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(response.choices[0].message.content)
Streaming example:
from venice_ai import VeniceClient
client = VeniceClient()
stream = client.chat.completions.create(
    model="llama-3.2-3b",
    messages=[
        {"role": "user", "content": "Tell me a short story."}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
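If you also need the complete text after streaming it, accumulate the fragments as they are printed. The helper below works on plain strings, used here as stand-ins for the chunk.choices[0].delta.content values a real stream yields:

```python
def collect_stream(deltas):
    """Print streamed text fragments as they arrive and return the joined text."""
    parts = []
    for delta in deltas:
        if delta:  # skip empty keep-alive fragments
            print(delta, end="")
            parts.append(delta)
    print()
    return "".join(parts)

full_text = collect_stream(["Once ", "upon ", "a ", "time."])
```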
Image Generation
Example:
from venice_ai import VeniceClient
import base64
from PIL import Image
import io
client = VeniceClient()
response = client.image.generate(
    model="venice-sd35",  # Or your preferred image model
    prompt="A futuristic cityscape at sunset",
    width=1024,
    height=1024
)

# Assuming response.images[0] contains base64-encoded image data
if response.images:
    img_b64 = response.images[0]
    img_bytes = base64.b64decode(img_b64)
    # To display or save the image (e.g., using Pillow):
    # pil_image = Image.open(io.BytesIO(img_bytes))
    # pil_image.show()
    # pil_image.save("generated_image.png")
    print("Image generated successfully (first image data received).")
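Decoding and saving the returned image needs only the standard library, without Pillow. In this sketch, img_b64 is a tiny 1x1 PNG used purely as placeholder data standing in for response.images[0]:

```python
import base64
from pathlib import Path

def save_b64_image(img_b64: str, path: str) -> int:
    """Decode a base64-encoded image and write it to disk; returns bytes written."""
    img_bytes = base64.b64decode(img_b64)
    Path(path).write_bytes(img_bytes)
    return len(img_bytes)

# 1x1 PNG, used purely as placeholder data for this example.
img_b64 = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)
print(save_b64_image(img_b64, "generated_image.png"))
```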
Embeddings Creation
Example of creating embeddings for a piece of text:
from venice_ai import VeniceClient
client = VeniceClient()
try:
    response = client.embeddings.create(
        model="text-embedding-ada-002",  # Or your preferred embeddings model
        input="The Venice AI Python client makes API interaction seamless."
    )
    # response.data contains a list of embedding objects.
    if response.data and response.data[0].embedding:
        first_embedding_vector = response.data[0].embedding
        print(f"Generated embedding vector (first 5 dimensions): {first_embedding_vector[:5]}")
        print(f"Total dimensions of the vector: {len(first_embedding_vector)}")
    else:
        print("No embedding data received.")
except Exception as e:  # Catching a general exception for API or client errors
    print(f"An error occurred during embedding creation: {e}")
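Embedding vectors are most often compared with cosine similarity. A dependency-free sketch, where the short vectors are made-up stand-ins for real response.data[i].embedding values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v1 = [0.1, 0.3, 0.5]
v2 = [0.1, 0.3, 0.5]
v3 = [0.5, -0.2, 0.0]
print(round(cosine_similarity(v1, v2), 3))  # identical vectors score 1.0
print(cosine_similarity(v1, v3) < 1.0)
```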
For more detailed examples of other functionalities (Audio, Billing, API Keys, Characters), please refer to the Showcase Application and the official API Documentation.
Showcase Application
This project includes a Streamlit application (app.py) that demonstrates various features of the venice-ai library.
To run the showcase application:
# Ensure you have installed dev dependencies (including Streamlit):
# poetry install --with dev
poetry run streamlit run app.py
Testing
The library includes a comprehensive test suite using pytest.
To run all tests (unit, E2E, benchmarks) and generate a coverage report:
# Ensure dev dependencies are installed: poetry install --with dev
poetry run python test_runner.py --group all --coverage --html
The test runner (test_runner.py) also supports an interactive mode and options to run specific test groups or files. Run poetry run python test_runner.py --help for more options.
Documentation
Detailed API documentation for the Venice.ai API is available at: https://docs.venice.ai/api-reference
The client library's own documentation is generated using Sphinx from the docstrings within the codebase.
Contributing
Contributions are welcome; please feel free to open issues.
License
This project is licensed under the MIT License.
File details
Details for the file venice_ai-1.0.0.tar.gz.
File metadata
- Download URL: venice_ai-1.0.0.tar.gz
- Upload date:
- Size: 93.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9f56be63c12d88d34a25332b025aeafa3cd4460f5603f7621be79b8f068b25c7 |
| MD5 | f3240cf8e5b04dd1ae2d41795e28034a |
| BLAKE2b-256 | 9afadc5ae30e1f13aec78d3f0eaec029a7fb8ec10017517bf21fa525f867358c |
Provenance
The following attestation bundles were made for venice_ai-1.0.0.tar.gz:
Publisher: python-publish.yaml on sethbang/venice-ai
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: venice_ai-1.0.0.tar.gz
- Subject digest: 9f56be63c12d88d34a25332b025aeafa3cd4460f5603f7621be79b8f068b25c7
- Sigstore transparency entry: 227342841
- Sigstore integration time:
- Permalink: sethbang/venice-ai@9bbfea3f627839021a0cc21a24c84497f926c765
- Branch / Tag: refs/tags/v1.0.0
- Owner: https://github.com/sethbang
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yaml@9bbfea3f627839021a0cc21a24c84497f926c765
- Trigger Event: release
File details
Details for the file venice_ai-1.0.0-py3-none-any.whl.
File metadata
- Download URL: venice_ai-1.0.0-py3-none-any.whl
- Upload date:
- Size: 106.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | d65d581d3784e4ce9525b97c8a821e89cb7942cd93ca2d3ae65cfc3e7e0eab92 |
| MD5 | a62f634edd5f94fb1b0249ab3bb7c094 |
| BLAKE2b-256 | e3d6fefbe9ead4be4c4c4d69c1fcfc78fe95de7605303740cb8a1c34419574f9 |
Provenance
The following attestation bundles were made for venice_ai-1.0.0-py3-none-any.whl:
Publisher: python-publish.yaml on sethbang/venice-ai
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: venice_ai-1.0.0-py3-none-any.whl
- Subject digest: d65d581d3784e4ce9525b97c8a821e89cb7942cd93ca2d3ae65cfc3e7e0eab92
- Sigstore transparency entry: 227342842
- Sigstore integration time:
- Permalink: sethbang/venice-ai@9bbfea3f627839021a0cc21a24c84497f926c765
- Branch / Tag: refs/tags/v1.0.0
- Owner: https://github.com/sethbang
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yaml@9bbfea3f627839021a0cc21a24c84497f926c765
- Trigger Event: release