
Spice

A Python library for building AI-powered applications.

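Spice is published on PyPI as spiceai, so it can be installed the usual way:

pip install spiceai
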
Usage Examples

All examples can be found in scripts/run.py

from spice import Spice

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "list 5 random words"},
]

client = Spice()
response = await client.call_llm(messages=messages, model="gpt-4-0125-preview")
print(response.text)
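
Since call_llm is a coroutine, the snippets here assume an async context (a notebook or async REPL). A minimal way to run the first example as a standalone script, assuming your OPENAI_API_KEY is set in the environment as the OpenAI SDK expects:

import asyncio

from spice import Spice

async def main():
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "list 5 random words"},
    ]
    client = Spice()
    # call_llm returns a response object with the generated text
    response = await client.call_llm(messages=messages, model="gpt-4-0125-preview")
    print(response.text)

asyncio.run(main())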

Streaming

# you can set a default model for the client instead of passing it with each call
client = Spice(model="claude-3-opus-20240229")

# the messages format is automatically converted for the Anthropic API
response = await client.call_llm(messages=messages, stream=True)

# spice wraps the stream to extract just the text you care about
async for text in response.stream():
    print(text, end="", flush=True)

# the response always includes the final text, no need to build it from the stream yourself
print(response.text)

# response also includes helpful stats
print(f"Took {response.total_time:.2f}s")
print(f"Time to first token: {response.time_to_first_token:.2f}s")
print(f"Input/Output tokens: {response.input_tokens}/{response.output_tokens}")

Mixing Providers

import asyncio

# alias model names for easy configuration, even mixing providers
model_aliases = {
    "task1_model": {"model": "gpt-4-0125-preview"},
    "task2_model": {"model": "claude-3-opus-20240229"},
    "task3_model": {"model": "claude-3-haiku-20240307"},
}

client = Spice(model_aliases=model_aliases)

responses = await asyncio.gather(
    client.call_llm(messages=messages, model="task1_model"),
    client.call_llm(messages=messages, model="task2_model"),
    client.call_llm(messages=messages, model="task3_model"),
)

for i, response in enumerate(responses, 1):
    print(f"\nModel {i} response:")
    print(response.text)
    print(f"Characters per second: {response.characters_per_second:.2f}")

Logging Callbacks

client = Spice(model="gpt-3.5-turbo-0125")

# pass a logging function to get a callback after the stream completes
response = await client.call_llm(
    messages=messages, stream=True, logging_callback=lambda response: print(response.text)
)

async for text in response.stream():
    print(text, end="", flush=True)
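
The callback receives the completed response, so it can do more than print; for example, it could append the final text and timing to a log file. A sketch, using only the response attributes shown earlier:

def log_response(response):
    # called once, after the stream has completed
    with open("llm_calls.log", "a") as f:
        f.write(f"{response.total_time:.2f}s\t{response.text}\n")

response = await client.call_llm(
    messages=messages, stream=True, logging_callback=log_response
)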
