Tiny AI client for LLMs. As simple as it gets.

# Tiny AI Client
Inspired by tinygrad and simpleaichat, tiny-ai-client is the easiest way to use and switch between LLMs, with vision and tool-usage support. It works because it is tiny, simple and, most importantly, fun to develop.

I want to change LLMs with ease while knowing what is happening under the hood. Langchain is cool, but it became bloated and complicated; there is just too much chaos going on. I want to keep it simple: easy to understand and easy to use. If you have an API key and want to use an LLM, you should not need to read 1000 lines of code and write `response.choices[0].message.content` to get the response.

Simple and tiny, that's the goal.
Features:
- OpenAI
- Anthropic
- Async
- Tool usage
- Structured output
- Vision
- PyPI package
- Gemini (vision, no tools)
- Ollama (text, no vision, no tools). To use it, set `model_name='ollama:llama3'` or your model name. You can also pass a custom `model_server_url` to `AI`/`AsyncAI`.
- Groq (text, tools, no vision). To use it, set `model_name='groq:llama-70b-8192'` or your model name as in the Groq docs.

Roadmap:
- Gemini tools
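The `provider:model` prefixes above suggest a simple routing scheme. Here is a minimal, hypothetical sketch of how such a prefix could be parsed; the function name and the default provider are illustrative assumptions, not tiny-ai-client's actual internals:

```python
def parse_model_name(model_name: str) -> tuple[str, str]:
    """Split an optional 'provider:' prefix off a model name.

    Hypothetical helper: tiny-ai-client's real routing may differ.
    """
    if ":" in model_name:
        provider, model = model_name.split(":", 1)
        return provider, model
    # No prefix: assume a default provider for illustration.
    return "openai", model_name


print(parse_model_name("ollama:llama3"))        # ('ollama', 'llama3')
print(parse_model_name("groq:llama-70b-8192"))  # ('groq', 'llama-70b-8192')
print(parse_model_name("gpt-4o"))               # ('openai', 'gpt-4o')
```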
## Simple

tiny-ai-client is simple and intuitive:
- Do you want to set your model? Just pass the model name.
- Do you want to change your model? Just change the model name.
- Want to send a message? `msg: str = ai("hello")` and say goodbye to parsing a complex JSON.
- Do you want to use a tool? Just pass the tool as a function. Type-hint it with a single argument that inherits from `pydantic.BaseModel` and just pass the callable. `AI` will call it and get its results back to you if the model wants to.
- Want to use vision? Just pass a `PIL.Image.Image`.
- Video? Just pass a list of `PIL.Image.Image`.
## Tiny

`tiny-ai-client` is very small: its core logic is < 250 lines of code (including comments and docstrings) and ideally won't pass 500. It is and always will be easy to understand, tweak and use.
- The core logic is in `tiny_ai_client/models.py`
- Vision utils are in `tiny_ai_client/vision.py`
- Tool usage utils are in `tiny_ai_client/tools.py`
- The interfaces are implemented by subclassing `tiny_ai_client.models.LLMClientWrapper`, binding it to a specific LLM provider. This logic might get bigger, but it is isolated in a single file and does not affect the core logic.
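To illustrate the subclass-per-provider pattern described above, here is a self-contained sketch. The base-class shape and method name are assumptions for illustration only; the real `LLMClientWrapper` API may differ:

```python
from abc import ABC, abstractmethod


class LLMClientWrapper(ABC):
    """Simplified stand-in for tiny_ai_client.models.LLMClientWrapper."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Send a prompt to the bound provider and return its reply."""


class EchoProviderWrapper(LLMClientWrapper):
    """Hypothetical provider binding: here it just echoes the prompt."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


client = EchoProviderWrapper()
print(client.generate("hello"))  # echo: hello
```

The point is that each provider lives in its own subclass, so adding a provider never touches the core logic.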
## Usage

```shell
pip install tiny-ai-client
```

To test, set the following environment variables:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GROQ_API_KEY
- GOOGLE_API_KEY

Then, to run all examples:

```shell
./scripts/run-all-examples.sh
```
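Setting the keys in a POSIX shell looks like this (the values are placeholders for your own keys):

```shell
export OPENAI_API_KEY="..."
export ANTHROPIC_API_KEY="..."
export GROQ_API_KEY="..."
export GOOGLE_API_KEY="..."
```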
For OpenAI:

```python
from tiny_ai_client import AI, AsyncAI

ai = AI(
    model_name="gpt-4o", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = ai("What is the meaning of life?")

ai = AsyncAI(
    model_name="gpt-4o", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = await ai("What is the meaning of life?")
```
For Anthropic:

```python
from tiny_ai_client import AI, AsyncAI

ai = AI(
    model_name="claude-3-haiku-20240307", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = ai("What is the meaning of life?")

ai = AsyncAI(
    model_name="claude-3-haiku-20240307", system="You are Spock, from Star Trek.", max_new_tokens=128
)
response = await ai("What is the meaning of life?")
```
We also support tool usage for both. You can pass as many functions as you want, as type-hinted functions with a single argument that inherits from `pydantic.BaseModel`. `AI` will call the function and get its results back to you.
```python
from pydantic import BaseModel, Field

from tiny_ai_client import AI, AsyncAI


class WeatherParams(BaseModel):
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
    unit: str = Field(
        "celsius", description="Temperature unit", enum=["celsius", "fahrenheit"]
    )


def get_current_weather(weather: WeatherParams):
    """
    Get the current weather in a given location
    """
    return f"Getting the current weather in {weather.location} in {weather.unit}."


ai = AI(
    model_name="gpt-4o",
    system="You are Spock, from Star Trek.",
    max_new_tokens=32,
    tools=[get_current_weather],
)
response = ai("What is the meaning of life?")
print(f"{response=}")
response = ai("Please get the current weather in celsius for San Francisco.")
print(f"{response=}")
response = ai("Did it work?")
print(f"{response=}")
```
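The single pydantic argument is what lets a tool be described to the model as a JSON schema. A standalone sketch of that mapping, using plain pydantic v2 and independent of tiny-ai-client:

```python
from pydantic import BaseModel, Field


class WeatherParams(BaseModel):
    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
    unit: str = Field("celsius", description="Temperature unit")


# pydantic derives the JSON schema the model sees for this tool.
schema = WeatherParams.model_json_schema()
print(schema["required"])                       # ['location']
print(schema["properties"]["unit"]["default"])  # celsius
```

Fields without defaults become required, so the model knows it must supply `location` but may omit `unit`.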
And vision. Pass a list of `PIL.Image.Image` (or a single one) and we will handle the rest.

```python
from PIL import Image

from tiny_ai_client import AI, AsyncAI

ai = AI(
    model_name="gpt-4o",
    system="You are Spock, from Star Trek.",
    max_new_tokens=32,
)
response = ai(
    "Who is on the images?",
    images=[
        Image.open("assets/kirk.jpg"),
        Image.open("assets/spock.jpg"),
    ],
)
print(f"{response=}")
```
## File details

Details for the file `tiny_ai_client-0.0.13.tar.gz`.

### File metadata
- Download URL: tiny_ai_client-0.0.13.tar.gz
- Upload date:
- Size: 12.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.0.1 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `10eadfe8d869ff0091952011402840041ab0f2dbfe8174436c6b0a01af71740f` |
| MD5 | `28fa6b16cf9c8d57a19cbf32c3233daf` |
| BLAKE2b-256 | `ecc044219d10aa642fbdbff057995445565dd79af8197a0349b60ec9920ac592` |
### Provenance

The following attestation bundles were made for `tiny_ai_client-0.0.13.tar.gz`:

Publisher: `pythonpublish.yml` on `piEsposito/tiny-ai-client`

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: tiny_ai_client-0.0.13.tar.gz
- Subject digest: 10eadfe8d869ff0091952011402840041ab0f2dbfe8174436c6b0a01af71740f
- Sigstore transparency entry: 157939849
- Sigstore integration time:
- Permalink: piEsposito/tiny-ai-client@03543a25cc1760c24d17671e6b8e97c4a6f769ad
- Branch / Tag: refs/tags/0.0.13
- Owner: https://github.com/piEsposito
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pythonpublish.yml@03543a25cc1760c24d17671e6b8e97c4a6f769ad
- Trigger Event: release
## File details

Details for the file `tiny_ai_client-0.0.13-py3-none-any.whl`.

### File metadata
- Download URL: tiny_ai_client-0.0.13-py3-none-any.whl
- Upload date:
- Size: 15.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.0.1 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4afaf03b38e023f07a53cd78bbbe349390744f06106e7cc642f6c00ad12bb1f1` |
| MD5 | `6ccd7894566e3c89c8ab9c7b28a80fc4` |
| BLAKE2b-256 | `774a160cf596f8a22c014eb0408c4e562554ba3b90ed47ea0230e7de2a2e833d` |
### Provenance

The following attestation bundles were made for `tiny_ai_client-0.0.13-py3-none-any.whl`:

Publisher: `pythonpublish.yml` on `piEsposito/tiny-ai-client`

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: tiny_ai_client-0.0.13-py3-none-any.whl
- Subject digest: 4afaf03b38e023f07a53cd78bbbe349390744f06106e7cc642f6c00ad12bb1f1
- Sigstore transparency entry: 157939850
- Sigstore integration time:
- Permalink: piEsposito/tiny-ai-client@03543a25cc1760c24d17671e6b8e97c4a6f769ad
- Branch / Tag: refs/tags/0.0.13
- Owner: https://github.com/piEsposito
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pythonpublish.yml@03543a25cc1760c24d17671e6b8e97c4a6f769ad
- Trigger Event: release