# llm-taxi

Call LLMs as easily as calling a taxi.
## Installation

```bash
pip install llm-taxi
```
## Usage

### Use as a library
```python
import asyncio

from llm_taxi.conversation import Message, Role
from llm_taxi.factory import llm


async def main():
    # Create a client for any supported provider with a "provider:model" string.
    client = llm(model="openai:gpt-3.5-turbo")
    messages = [
        Message(role=Role.User, content="What is the capital of France?"),
    ]
    # Get the full completion in a single call.
    response = await client.response(messages)
    print(response)

    # Switch providers by changing the model string.
    client = llm(model="mistral:mistral-small")
    messages = [
        Message(role=Role.User, content="Tell me a joke."),
    ]
    # Stream the completion chunk by chunk.
    response = await client.streaming_response(messages)
    async for chunk in response:
        print(chunk, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(main())
```
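Note that `llm()` is not passed an API key in the example above; presumably each provider client reads its credentials from the environment. A minimal sketch, assuming the providers' conventional variable names (`OPENAI_API_KEY`, `MISTRAL_API_KEY`; these names are an assumption, so verify them for your provider):

```python
import os

# Assumption: llm-taxi's clients pick up provider credentials from the
# environment. Set these before creating clients; the values below are
# placeholders, not real keys.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["MISTRAL_API_KEY"] = "..."
```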
### Command line interface

```bash
llm-taxi --model openai:gpt-3.5-turbo-16k
```

See all supported arguments:

```bash
llm-taxi --help
```
## Supported LLM Providers
- Anthropic
- DashScope
- DeepInfra
- DeepSeek
- Groq
- Mistral
- OpenAI
- OpenRouter
- Perplexity
- Together
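Each provider is selected with the `provider:model` prefix shown in the examples above. A minimal sketch; the model names here are illustrative placeholders, so substitute any model the provider actually serves:

```python
from llm_taxi.factory import llm

# The prefix before the colon selects the provider; the part after it is
# passed to that provider as the model name. Model names below are
# illustrative, not confirmed defaults.
anthropic_client = llm(model="anthropic:claude-3-haiku-20240307")
groq_client = llm(model="groq:llama3-8b-8192")
together_client = llm(model="together:meta-llama/Llama-3-8b-chat-hf")
```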