# Spice

A Python library for building AI-powered applications.
## Usage Examples
All examples can be found in `scripts/run.py`.
```python
from spice import Spice

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "list 5 random words"},
]

client = Spice()
response = await client.call_llm(messages=messages, model="gpt-4-0125-preview")
print(response.text)
```
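Since `call_llm` is a coroutine, the snippets here assume they already run inside an async context. A minimal sketch of running the example above end to end, using only the standard library's `asyncio` (nothing Spice-specific beyond what is shown above):

```python
import asyncio

from spice import Spice


async def main():
    client = Spice()
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "list 5 random words"},
    ]
    response = await client.call_llm(messages=messages, model="gpt-4-0125-preview")
    print(response.text)


asyncio.run(main())
```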
## Streaming
```python
# you can set a default model for the client instead of passing it with each call
client = Spice(model="claude-3-opus-20240229")

# the messages format is automatically converted for the Anthropic API
response = await client.call_llm(messages=messages, stream=True)

# spice wraps the stream to extract just the text you care about
async for text in response.stream():
    print(text, end="", flush=True)

# the response always includes the final text, no need to build it from the stream yourself
print(response.text)

# the response also includes helpful stats
print(f"Took {response.total_time:.2f}s")
print(f"Time to first token: {response.time_to_first_token:.2f}s")
print(f"Input/Output tokens: {response.input_tokens}/{response.output_tokens}")
```
## Mixing Providers
```python
import asyncio

# alias model names for easy configuration, even mixing providers
model_aliases = {
    "task1_model": {"model": "gpt-4-0125-preview"},
    "task2_model": {"model": "claude-3-opus-20240229"},
    "task3_model": {"model": "claude-3-haiku-20240307"},
}
client = Spice(model_aliases=model_aliases)

responses = await asyncio.gather(
    client.call_llm(messages=messages, model="task1_model"),
    client.call_llm(messages=messages, model="task2_model"),
    client.call_llm(messages=messages, model="task3_model"),
)

for i, response in enumerate(responses, 1):
    print(f"\nModel {i} response:")
    print(response.text)
    print(f"Characters per second: {response.characters_per_second:.2f}")
```
## Logging Callbacks
client = Spice(model="gpt-3.5-turbo-0125")
# pass a logging function to get a callback after the stream completes
response = await client.call_llm(
messages=messages, stream=True, logging_callback=lambda response: print(response.text)
)
async for text in response.stream():
print(text, end="", flush=True)
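The callback receives the completed response object, so it can persist anything the response exposes. A hedged sketch that logs to a file instead of printing: the `log_response` helper and the `llm_log.txt` path are illustrative choices, `client` and `messages` are reused from the example above, and the response fields are the ones shown earlier.

```python
def log_response(response):
    # append final text and stats once the stream completes (hypothetical log format)
    with open("llm_log.txt", "a") as f:
        f.write(
            f"{response.total_time:.2f}s\t"
            f"{response.input_tokens}/{response.output_tokens}\t"
            f"{response.text}\n"
        )


response = await client.call_llm(messages=messages, stream=True, logging_callback=log_response)
async for text in response.stream():
    print(text, end="", flush=True)
```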