Tiny AI client for LLMs. As simple as it gets.
Tiny AI Client
Inspired by tinygrad and simpleaichat, tiny-ai-client is the easiest way to use and switch between LLMs, with vision and tool usage support. It works because it is tiny, simple, and most importantly fun to develop.
I want to change LLMs with ease while knowing what is happening under the hood. LangChain is cool, but it became bloated and complicated; there is just too much chaos going on. I want to keep it simple, easy to understand, and easy to use. If you want to use an LLM and have an API key, you should not need to read 1000 lines of code and write response.choices[0].message.content to get the response.
Simple and tiny, that's the goal.
Features:
- OpenAI
- Anthropic
- Async
- Tool usage
- Structured output
- Vision
- PyPI package tiny-ai-client
- Gemini (vision, no tools)
- Ollama (text, no vision, no tools) (you can also pass a custom model_server_url to AI/AsyncAI)
  - To use it, set model_name='ollama:llama3' or your model name (see the sketch after this list).
- Groq (text, tools, no vision)
  - To use it, set model_name='groq:llama-70b-8192' or your model name as in the Groq docs.
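For example, switching providers is just a model_name change. A minimal sketch, assuming a local Ollama server is running for the first call and GROQ_API_KEY is set for the second (both model names are placeholders):

from tiny_ai_client import AI

# Same interface across providers: only model_name changes.
ai = AI(model_name="ollama:llama3", system="Answer briefly.")
print(ai("hello"))

# Swap providers by swapping the prefix and model name.
ai = AI(model_name="groq:llama-70b-8192", system="Answer briefly.")
print(ai("hello"))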
Roadmap:
- Gemini tools
Simple
tiny-ai-client is simple and intuitive:
- Do you want to set your model? Just pass the model name.
- Do you want to change your model? Just change the model name.
- Want to send a message? msg: str = ai("hello") and say goodbye to parsing a complex JSON.
- Do you want to use a tool? Just pass the tool as a function: type hint it with a single argument that inherits from pydantic.BaseModel and pass the callable. AI will call it and get its results to you if the model wants to.
- Want to use vision? Just pass a PIL.Image.Image.
- Video? Just pass a list of PIL.Image.Image.
Tiny
tiny-ai-client is very small: its core logic is < 250 lines of code (including comments and docstrings) and ideally won't pass 500. It is and always will be easy to understand, tweak, and use.
- The core logic is in tiny_ai_client/models.py
- Vision utils are in tiny_ai_client/vision.py
- Tool usage utils are in tiny_ai_client/tools.py
- The interfaces are implemented by subclassing tiny_ai_client.models.LLMClientWrapper, binding it to a specific LLM provider (see the sketch below). This logic might get bigger, but it is isolated in a single file and does not affect the core logic.
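For orientation, here is a rough sketch of what a new provider binding could look like. The hook names below (build_model_input, call_llm_model) are hypothetical; the real interface lives in tiny_ai_client/models.py:

from tiny_ai_client.models import LLMClientWrapper

class EchoClientWrapper(LLMClientWrapper):
    """Toy binding that just echoes the last message."""

    def build_model_input(self, messages):
        # Hypothetical hook: translate tiny-ai-client messages into the
        # provider's request format.
        return messages

    def call_llm_model(self, model_input):
        # Hypothetical hook: call the provider and return its text reply.
        return f"echo: {model_input[-1]}"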
Usage
pip install tiny-ai-client
To test, set the following environment variables:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GROQ_API_KEY
- GOOGLE_API_KEY
Then, to run all the examples:
./scripts/run-all-examples.sh
For OpenAI:
from tiny_ai_client import AI, AsyncAI
ai = AI(
model_name="gpt-4o", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = ai("What is the meaning of life?")
ai = AsyncAI(
model_name="gpt-4o", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = await ai("What is the meaning of life?")
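The await call above assumes you are already in an async context (e.g. a notebook). In a plain script, a minimal sketch would drive it with asyncio:

import asyncio

from tiny_ai_client import AsyncAI

async def main():
    ai = AsyncAI(
        model_name="gpt-4o", system="You are Spock, from Star Trek.", max_new_tokens=128
    )
    print(await ai("What is the meaning of life?"))

asyncio.run(main())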
For Anthropic:
from tiny_ai_client import AI, AsyncAI
ai = AI(
model_name="claude-3-haiku-20240307", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = ai("What is the meaning of life?")
ai = AsyncAI(
model_name="claude-3-haiku-20240307", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = await ai("What is the meaning of life?")
We also support tool usage for both. You can pass as many functions as you want, each type-hinted with a single argument that inherits from pydantic.BaseModel. AI will call the function and get its results back to you.
from pydantic import BaseModel, Field
from tiny_ai_client import AI, AsyncAI
class WeatherParams(BaseModel):
location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
unit: str = Field(
"celsius", description="Temperature unit", enum=["celsius", "fahrenheit"]
)
def get_current_weather(weather: WeatherParams):
"""
Get the current weather in a given location
"""
return f"Getting the current weather in {weather.location} in {weather.unit}."
ai = AI(
model_name="gpt-4o",
system="You are Spock, from Star Trek.",
max_new_tokens=32,
tools=[get_current_weather],
)
response = ai("What is the meaning of life?")
print(f"{response=}")
response = ai("Please get the current weather in celsius for San Francisco.")
print(f"{response=}")
response = ai("Did it work?")
print(f"{response=}")
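Since tools takes a list, you can register several functions at once. A sketch that reuses get_current_weather from above and adds a hypothetical get_current_time tool:

from datetime import datetime, timezone

from pydantic import BaseModel, Field
from tiny_ai_client import AI

class TimeParams(BaseModel):
    label: str = Field("UTC", description="Timezone label to report, e.g. UTC")

def get_current_time(params: TimeParams):
    """
    Get the current time (UTC).
    """
    return f"Current time ({params.label}): {datetime.now(timezone.utc).isoformat()}"

ai = AI(
    model_name="gpt-4o",
    system="You are Spock, from Star Trek.",
    max_new_tokens=32,
    tools=[get_current_weather, get_current_time],
)
response = ai("What time is it, and what is the weather in San Francisco?")
print(f"{response=}")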
And vision: pass a list of PIL.Image.Image (or a single one) and we will handle the rest.
from tiny_ai_client import AI, AsyncAI
from PIL import Image
ai = AI(
model_name="gpt-4o",
system="You are Spock, from Star Trek.",
max_new_tokens=32,
)
response = ai(
    "Who is on the images?",
    images=[
        Image.open("assets/kirk.jpg"),
        Image.open("assets/spock.jpg"),
    ],
)
print(f"{response=}")