A super lightweight library for LLM-based applications
A lightweight Python library for building AI-powered applications with clean function calling, vision support, and MLflow integration.
TinyLoop is fully built on top of LiteLLM, providing 100% compatibility with the LiteLLM API while adding powerful abstractions and utilities. This means you can use any model, provider, or feature that LiteLLM supports, including:
- All LLM Providers: OpenAI, Anthropic, Google, Azure, Cohere, and 100+ more
- All Model Types: Chat, completion, embedding, and vision models
- Advanced Features: Streaming, function calling, structured outputs, and more
- Ops Features: Retries, fallbacks, caching, and cost tracking
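Because calls pass through LiteLLM, ops features such as retries and fallbacks come down to the familiar retry-with-fallback pattern. A self-contained sketch of that pattern with a stubbed client (illustrative only; `complete_with_fallbacks` and `fake_call` are not TinyLoop APIs, and in practice this is configured through LiteLLM's own `num_retries` and `fallbacks` parameters):

```python
import time


def complete_with_fallbacks(models, call, retries=2, delay=0.0):
    """Try each model in order, retrying transient failures.

    `call(model)` is a stand-in for the underlying completion call.
    """
    last_error = None
    for model in models:
        for _attempt in range(retries + 1):
            try:
                return model, call(model)
            except Exception as exc:  # transient provider error
                last_error = exc
                time.sleep(delay)
    raise RuntimeError(f"all models failed: {last_error}")


# Stubbed client: the first model always times out, the second succeeds.
def fake_call(model):
    if model == "openai/gpt-3.5-turbo":
        raise TimeoutError("provider timeout")
    return "Hello!"


model, text = complete_with_fallbacks(
    ["openai/gpt-3.5-turbo", "anthropic/claude-3-haiku"], fake_call
)
```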
TinyLoop provides a clean, intuitive interface for working with Large Language Models (LLMs), featuring:
- 🎯 Clean Function Calling: Convert Python functions to JSON tool definitions automatically
- 📊 MLflow Integration: Built-in tracing and monitoring with customizable span names
- 👁️ Vision Support: Handle images and vision models seamlessly
- 📝 Structured Output: Generate structured data from LLM responses using Pydantic
- 🔄 Tool Loops: Execute multi-step tool calling workflows
- ⚡ Async Support: Full async/await support for all operations
📦 Installation

```bash
pip install tinyloop
```
🚀 Quick Start
Basic LLM Usage
Synchronous Calls
```python
from tinyloop.inference.litellm import LLM

# Initialize the LLM
llm = LLM(model="openai/gpt-3.5-turbo", temperature=0.1)

# Simple text generation
response = llm(prompt="Hello, how are you?")
print(response)

# Get conversation history
history = llm.get_history()

# Access comprehensive response information
print(f"Response: {response}")
print(f"Cost: ${response.cost:.6f}")
print(f"Tool calls: {response.tool_calls}")
print(f"Raw response: {response.raw_response}")
print(f"Message history: {len(response.message_history)} messages")
```
Asynchronous Calls
```python
from tinyloop.inference.litellm import LLM

llm = LLM(model="openai/gpt-3.5-turbo", temperature=0.1)

# Async text generation (run inside an async function / event loop)
response = await llm.acall(prompt="Hello, how are you?")
print(response)
```
🔄 Tool Loops
Execute multi-step tool calling workflows:
```python
import random

from pydantic import BaseModel

from tinyloop.features.function_calling import Tool
from tinyloop.modules.tool_loop import ToolLoop


def roll_dice():
    """Roll a dice and return the result"""
    return random.randint(1, 6)


class FinalAnswer(BaseModel):
    last_roll: int
    reached_goal: bool


# Create tool loop
loop = ToolLoop(
    model="openai/gpt-4.1",
    system_prompt="""
    You are a dice rolling assistant.
    Roll a dice until you get the number indicated in the prompt.
    Use the roll_dice function to roll the dice.
    Return the last roll and whether you reached the goal.
    """,
    temperature=0.1,
    output_format=FinalAnswer,
    tools=[Tool(roll_dice)],
)

# Execute the loop
response = loop(
    prompt="Roll a dice until you get a 6",
    parallel_tool_calls=False,
)

print(f"Last roll: {response.last_roll}")
print(f"Reached goal: {response.reached_goal}")
```
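Under the hood, a tool loop alternates between model calls and local tool execution until the model stops requesting tools, then validates the final answer. A minimal, dependency-free sketch of that control flow with a stubbed model (all names here are illustrative, not TinyLoop internals; the "dice" is deterministic so the example is reproducible):

```python
import json

# Deterministic "dice" so the sketch is reproducible.
_rolls = iter([2, 4, 6])


def roll_dice():
    return next(_rolls)


TOOLS = {"roll_dice": roll_dice}


def fake_model(history):
    """Stand-in for the LLM: keeps requesting rolls until a 6 appears."""
    rolls = [m["content"] for m in history if m["role"] == "tool"]
    if rolls and rolls[-1] == "6":
        return {"final": json.dumps({"last_roll": 6, "reached_goal": True})}
    return {"tool_calls": [{"name": "roll_dice", "args": {}}]}


def run_tool_loop(model, prompt, max_iterations=20):
    """Alternate model calls and tool execution until a final answer."""
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        reply = model(history)
        if "final" in reply:
            return json.loads(reply["final"])
        for call in reply["tool_calls"]:
            result = TOOLS[call["name"]](**call["args"])
            history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("loop did not finish")


result = run_tool_loop(fake_model, "Roll a dice until you get a 6")
```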
Supported Features
🎯 Structured Output Generation
Generate structured data using Pydantic models:
```python
from typing import List

from pydantic import BaseModel

from tinyloop.inference.litellm import LLM


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: List[str]


class EventsList(BaseModel):
    events: List[CalendarEvent]


# Initialize the LLM
llm = LLM(
    model="openai/gpt-4.1-nano",
    temperature=0.1,
)

# Generate structured data
response = llm(
    prompt="List 5 important events in the XIX century",
    response_format=EventsList,
)

# Access structured data
for event in response.events:
    print(f"{event.name} - {event.date}")
    print(f"Participants: {', '.join(event.participants)}")
```
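Under the hood, `response_format` generally means the model is constrained to emit JSON matching the schema, which is then parsed and validated into typed objects. A stdlib-only sketch of that last step (TinyLoop uses Pydantic for this; dataclasses and the hard-coded `raw` payload stand in here for illustration):

```python
import json
from dataclasses import dataclass
from typing import List


@dataclass
class CalendarEvent:
    name: str
    date: str
    participants: List[str]


# What a schema-constrained model might return as raw text.
raw = """{"events": [
    {"name": "Battle of Waterloo", "date": "1815-06-18",
     "participants": ["Napoleon", "Wellington"]}
]}"""

# Parse, then lift each JSON object into a typed instance.
payload = json.loads(raw)
events = [CalendarEvent(**e) for e in payload["events"]]
```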
👁️ Vision
Work with images using various input methods:
```python
from PIL import Image as PILImage

from tinyloop.features.vision import Image
from tinyloop.inference.litellm import LLM

llm = LLM(model="openai/gpt-4.1-nano", temperature=0.1)

# From PIL Image
pil_image = PILImage.open("image.jpg")
image = Image.from_PIL(pil_image)

# From file path
image = Image.from_file("image.jpg")

# From URL
image = Image.from_url("https://example.com/image.jpg")

# Analyze image
response = llm(prompt="Describe this image", images=[image])
print(response)
```
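Vision-capable chat APIs typically receive local images as base64 `data:` URLs inside the message content; helpers like `Image.from_file` presumably perform this encoding. A stdlib sketch of the encoding step (`to_data_url` is an illustrative name, not a TinyLoop function):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a data URL a vision model can consume."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# Tiny fake payload; in practice this would be the image file's bytes.
url = to_data_url(b"\xff\xd8\xff\xe0fake-jpeg")
```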
🔧 Function Calling
Convert Python functions to LLM tools with automatic schema generation:
```python
import json

from tinyloop.features.function_calling import Tool
from tinyloop.inference.litellm import LLM


def get_current_weather(location: str, unit: str):
    """Get the current weather in a given location

    Args:
        location: The city and state, e.g. San Francisco, CA
        unit: Temperature unit {'celsius', 'fahrenheit'}

    Returns:
        A sentence indicating the weather
    """
    if location == "Boston, MA":
        return "The weather is 12°F"
    return f"Weather in {location} is sunny"


# Create LLM instance
llm = LLM(model="openai/gpt-4.1-nano", temperature=0.1)

# Create tool from function
weather_tool = Tool(get_current_weather)

# Use function calling
inference = llm(
    prompt="What is the weather in Boston, MA?",
    tools=[weather_tool],
)

# Process tool calls
for tool_call in inference.raw_response.choices[0].message.tool_calls:
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)
    print(f"Tool: {tool_name}")
    print(f"Args: {tool_args}")
    print(weather_tool(**tool_args))

# Access comprehensive response information
print(f"Total cost: ${inference.cost:.6f}")
print(f"Tool calls made: {len(inference.tool_calls) if inference.tool_calls else 0}")
print(f"Conversation length: {len(inference.message_history)} messages")
```
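The automatic schema generation behind `Tool` can be approximated with `inspect` and type hints. A simplified sketch of how a function signature becomes an OpenAI-style tool definition (`function_to_tool` is an illustrative name; TinyLoop's actual implementation may differ, e.g. by also parsing per-argument docstring descriptions):

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema types.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}


def function_to_tool(fn):
    """Build an OpenAI-style tool definition from a function's signature."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => the model must supply it
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip().split("\n")[0],
            "parameters": {
                "type": "object",
                "properties": props,
                "required": required,
            },
        },
    }


def get_current_weather(location: str, unit: str = "celsius"):
    """Get the current weather in a given location"""


schema = function_to_tool(get_current_weather)
```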
📊 Observability: MLflow Integration
Automatic Tracing
TinyLoop automatically integrates with MLflow for tracing:
```python
import mlflow

from tinyloop.utils.mlflow import mlflow_trace


class Agent:
    # (abridged: __init__ setting self.llm, self.tools, self.tools_map,
    # self.max_iterations, and self.output_format is omitted)

    @mlflow_trace(mlflow.entities.SpanType.AGENT)
    def __call__(self, prompt: str, **kwargs):
        self.llm.add_message(self.llm._prepare_user_message(prompt))
        for _ in range(self.max_iterations):
            response = self.llm(
                messages=self.llm.get_history(), tools=self.tools, **kwargs
            )
            if response.tool_calls:
                should_finish = False
                for tool_call in response.tool_calls:
                    tool_response = self.tools_map[tool_call.function_name](
                        **tool_call.args
                    )
                    self.llm.add_message(
                        self._format_tool_response(tool_call, str(tool_response))
                    )
                    if tool_call.function_name == "finish":
                        should_finish = True
                        break
                if should_finish:
                    break
        return self.llm(
            messages=self.llm.get_history(),
            response_format=self.output_format,
        )
```
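A tracing decorator like `mlflow_trace` is essentially a wrapper that opens a span, runs the function, and records the span's name, type, and duration. A dependency-free sketch of that shape (`trace` and the `SPANS` list are illustrative stand-ins; the real decorator delegates to MLflow's tracing API):

```python
import functools
import time

SPANS = []  # stand-in for MLflow's span store


def trace(span_type):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record the span even if the wrapped call raises.
                SPANS.append({
                    "name": fn.__qualname__,
                    "type": span_type,
                    "seconds": time.perf_counter() - start,
                })
        return wrapper
    return decorator


@trace("AGENT")
def answer(prompt):
    return f"echo: {prompt}"


result = answer("hi")
```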
🏗️ Project Structure
```
tinyloop/
├── features/
│   ├── function_calling.py   # Function calling utilities
│   └── vision.py             # Vision model support
├── inference/
│   ├── base.py               # Base inference classes
│   └── litellm.py            # LiteLLM integration
├── modules/
│   ├── base_loop.py          # Base loop implementation
│   ├── generate.py           # Generation modules
│   └── tool_loop.py          # Tool execution loop
└── utils/
    └── mlflow.py             # MLflow utilities
```
🧪 Development
Running Tests
```bash
# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_function_calling.py -v

# Run with coverage
pytest tests/ --cov=tinyloop
```
Examples
Check out the Jupyter notebooks for more detailed examples:
- `basic_usage.ipynb` - Basic usage examples
- `modules.ipynb` - Advanced module usage
🤝 Contributing
We welcome contributions! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
File details
Details for the file tinyloop-0.1.27.tar.gz.
File metadata
- Download URL: tinyloop-0.1.27.tar.gz
- Upload date:
- Size: 552.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `877cf0bcb1a120b09034feb07eacd6c71549d70d5a5ce414f3aa5c532777e466` |
| MD5 | `c42f12d0269418c159e000cadf540f7b` |
| BLAKE2b-256 | `9285abdb67c08c8041cd7c7ae37b533c3b0709b377f5b964076e08d1ff855ab8` |
File details
Details for the file tinyloop-0.1.27-py3-none-any.whl.
File metadata
- Download URL: tinyloop-0.1.27-py3-none-any.whl
- Upload date:
- Size: 19.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b8b3c05b266392834a047a4ff1a4b613de8244533b69b2e9f3cb10ecaba72d47` |
| MD5 | `71d6284d5bbc2e5ba067635345a76e91` |
| BLAKE2b-256 | `df7528ac842ad62fa8a73dbaa7ccf01e1676c28389cad0a0cdd79239a3c8c234` |