
A super lightweight library for LLM-based applications

Project description

tinyLoop Logo

A lightweight Python library for building AI-powered applications with clean function calling, vision support, and MLflow integration.


TinyLoop is built entirely on top of LiteLLM and is 100% compatible with the LiteLLM API, while adding powerful abstractions and utilities. This means you can use any model, provider, or feature that LiteLLM supports, including:

  • All LLM Providers: OpenAI, Anthropic, Google, Azure, Cohere, and 100+ more
  • All Model Types: Chat, completion, embedding, and vision models
  • Advanced Features: Streaming, function calling, structured outputs, and more
  • Ops Features: Retries, fallbacks, caching, and cost tracking
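Retries and fallbacks are handled by LiteLLM itself; the pattern is easy to picture, though. A minimal sketch of a fallback chain (the `call_with_fallbacks` helper and stub `call_model` are illustrative, not part of the tinyloop API):

```python
# Illustrative only: LiteLLM implements retries/fallbacks internally.
def call_with_fallbacks(prompt, models, call_model, retries=2):
    """Try each model in order, retrying transient failures before falling back."""
    last_error = None
    for model in models:
        for _ in range(retries + 1):
            try:
                return call_model(model, prompt)
            except RuntimeError as err:  # stand-in for a provider error
                last_error = err
    raise last_error

# Stub "provider" that always fails for the first model
def call_model(model, prompt):
    if model == "openai/gpt-4.1":
        raise RuntimeError("rate limited")
    return f"{model}: ok"

print(call_with_fallbacks("hi", ["openai/gpt-4.1", "anthropic/claude-3-haiku"], call_model))
# → anthropic/claude-3-haiku: ok
```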

TinyLoop provides a clean, intuitive interface for working with Large Language Models (LLMs), featuring:

  • 🎯 Clean Function Calling: Convert Python functions to JSON tool definitions automatically
  • 🔍 MLflow Integration: Built-in tracing and monitoring with customizable span names
  • 👁️ Vision Support: Handle images and vision models seamlessly
  • 📊 Structured Output: Generate structured data from LLM responses using Pydantic
  • 🔄 Tool Loops: Execute multi-step tool calling workflows
  • ⚡ Async Support: Full async/await support for all operations

📦 Installation

pip install tinyloop

🚀 Quick Start

Basic LLM Usage

Synchronous Calls

from tinyloop.inference.litellm import LLM

# Initialize the LLM
llm = LLM(model="openai/gpt-3.5-turbo", temperature=0.1)

# Simple text generation
response = llm(prompt="Hello, how are you?")
print(response)

# Get conversation history
history = llm.get_history()

# Access comprehensive response information
print(f"Response: {response}")
print(f"Cost: ${response.cost:.6f}")
print(f"Tool calls: {response.tool_calls}")
print(f"Raw response: {response.raw_response}")
print(f"Message history: {len(response.message_history)} messages")

Asynchronous Calls

from tinyloop.inference.litellm import LLM

llm = LLM(model="openai/gpt-3.5-turbo", temperature=0.1)

# Async text generation (await must run inside an async function or event loop)
response = await llm.acall(prompt="Hello, how are you?")
print(response)
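Since `acall` is an ordinary coroutine, several prompts can be fanned out concurrently with `asyncio.gather`. A sketch using a stub coroutine in place of a real `llm.acall` (no API key needed):

```python
import asyncio

# Stub coroutine standing in for llm.acall(prompt=...)
async def acall(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"echo: {prompt}"

async def main():
    # Fan out several prompts concurrently instead of awaiting them one by one
    return await asyncio.gather(*(acall(p) for p in ["a", "b", "c"]))

print(asyncio.run(main()))  # → ['echo: a', 'echo: b', 'echo: c']
```

`gather` preserves argument order, so responses line up with their prompts.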

🔄 Tool Loops

Execute multi-step tool calling workflows:

from tinyloop.modules.tool_loop import ToolLoop
from tinyloop.features.function_calling import Tool
from pydantic import BaseModel
import random

def roll_dice():
    """Roll a dice and return the result"""
    return random.randint(1, 6)

class FinalAnswer(BaseModel):
    last_roll: int
    reached_goal: bool

# Create tool loop
loop = ToolLoop(
    model="openai/gpt-4.1",
    system_prompt="""
    You are a dice rolling assistant.
    Roll a dice until you get the number indicated in the prompt.
    Use the roll_dice function to roll the dice.
    Return the last roll and whether you reached the goal.
    """,
    temperature=0.1,
    output_format=FinalAnswer,
    tools=[Tool(roll_dice)]
)

# Execute the loop
response = loop(
    prompt="Roll a dice until you get a 6",
    parallel_tool_calls=False,
)

print(f"Last roll: {response.last_roll}")
print(f"Reached goal: {response.reached_goal}")
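Internally, a tool loop alternates between asking the model for its next step and executing the tool it requests, stopping when the model produces a final answer. A model-free sketch of that control flow (the scripted `fake_model` is illustrative; tinyloop's real loop drives an actual LLM):

```python
import random

random.seed(0)  # deterministic rolls for the example

def roll_dice():
    """Roll a dice and return the result"""
    return random.randint(1, 6)

def fake_model(history):
    """Stand-in for an LLM: keep requesting roll_dice until a 6 appears."""
    if history and history[-1] == 6:
        return {"final": {"last_roll": history[-1], "reached_goal": True}}
    return {"tool": "roll_dice"}

def tool_loop(model, tools, max_iterations=1000):
    history = []
    for _ in range(max_iterations):
        step = model(history)
        if "final" in step:
            return step["final"]
        history.append(tools[step["tool"]]())  # execute the requested tool
    return {"last_roll": history[-1], "reached_goal": False}

answer = tool_loop(fake_model, {"roll_dice": roll_dice})
```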

Supported Features

🎯 Structured Output Generation

Generate structured data using Pydantic models:

from tinyloop.inference.litellm import LLM
from pydantic import BaseModel
from typing import List

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: List[str]

class EventsList(BaseModel):
    events: List[CalendarEvent]

# Initialize LLM with structured output
llm = LLM(
    model="openai/gpt-4.1-nano",
    temperature=0.1,
)

# Generate structured data
response = llm(
    prompt="List 5 important events in the XIX century",
    response_format=EventsList
)

# Access structured data
for event in response.events:
    print(f"{event.name} - {event.date}")
    print(f"Participants: {', '.join(event.participants)}")
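Conceptually, `response_format` constrains the model to emit JSON matching the schema, which is then parsed back into typed objects. A dependency-free sketch of that round trip, using a dataclass in place of Pydantic (the JSON string stands in for a model response):

```python
import json
from dataclasses import dataclass
from typing import List

@dataclass
class CalendarEvent:
    name: str
    date: str
    participants: List[str]

# Stand-in for the JSON a schema-constrained model would return
raw = ('{"events": [{"name": "First telegraph message", "date": "1844-05-24", '
       '"participants": ["Samuel Morse"]}]}')

# Parse and validate into typed objects, as tinyloop does via Pydantic
events = [CalendarEvent(**e) for e in json.loads(raw)["events"]]
print(events[0].name)  # → First telegraph message
```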

๐Ÿ‘๏ธ Vision

Work with images using various input methods:

from tinyloop.inference.litellm import LLM
from tinyloop.features.vision import Image
from PIL import Image as PILImage

llm = LLM(model="openai/gpt-4.1-nano", temperature=0.1)

# From PIL Image
pil_image = PILImage.open("image.jpg")
image = Image.from_PIL(pil_image)

# From file path
image = Image.from_file("image.jpg")

# From URL
image = Image.from_url("https://example.com/image.jpg")

# Analyze image
response = llm(prompt="Describe this image", images=[image])
print(response)
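Under the hood, image inputs are typically serialized into the OpenAI-style `image_url` content part that LiteLLM forwards to providers, with files and PIL images base64-encoded as data URLs. A rough sketch (the helper name is illustrative):

```python
import base64

def image_content_part(image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Encode raw image bytes as an OpenAI-style image_url content part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

part = image_content_part(b"\xff\xd8\xff")  # truncated JPEG magic bytes as a stand-in
print(part["image_url"]["url"])  # → data:image/jpeg;base64,/9j/
```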

🔧 Function Calling

Convert Python functions to LLM tools with automatic schema generation:

from tinyloop.inference.litellm import LLM
from tinyloop.features.function_calling import Tool
import json

def get_current_weather(location: str, unit: str):
    """Get the current weather in a given location

    Args:
        location: The city and state, e.g. San Francisco, CA
        unit: Temperature unit {'celsius', 'fahrenheit'}

    Returns:
        A sentence indicating the weather
    """
    if location == "Boston, MA":
        return "The weather is 12°F"
    return f"Weather in {location} is sunny"

# Create LLM instance
llm = LLM(model="openai/gpt-4.1-nano", temperature=0.1)

# Create tool from function
weather_tool = Tool(get_current_weather)

# Use function calling
inference = llm(
    prompt="What is the weather in Boston, MA?",
    tools=[weather_tool],
)

# Process tool calls
for tool_call in inference.raw_response.choices[0].message.tool_calls:
    tool_name = tool_call.function.name
    tool_args = json.loads(tool_call.function.arguments)
    print(f"Tool: {tool_name}")
    print(f"Args: {tool_args}")
    print(weather_tool(**tool_args))

# Access comprehensive response information
print(f"Total cost: ${inference.cost:.6f}")
print(f"Tool calls made: {len(inference.tool_calls) if inference.tool_calls else 0}")
print(f"Conversation length: {len(inference.message_history)} messages")
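The automatic schema generation that `Tool` performs can be pictured as introspecting the function's signature and docstring to build an OpenAI-style tool definition. A simplified stand-in (not tinyloop's actual implementation):

```python
import inspect

def function_to_tool_schema(fn) -> dict:
    """Build an OpenAI-style tool definition from a function's signature."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    props = {
        name: {"type": type_map.get(param.annotation, "string")}
        for name, param in inspect.signature(fn).parameters.items()
    }
    doc = inspect.getdoc(fn) or ""
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": doc.splitlines()[0] if doc else "",
            "parameters": {"type": "object", "properties": props, "required": list(props)},
        },
    }

def get_current_weather(location: str, unit: str):
    """Get the current weather in a given location"""

schema = function_to_tool_schema(get_current_weather)
print(schema["function"]["name"])                    # → get_current_weather
print(schema["function"]["parameters"]["required"])  # → ['location', 'unit']
```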

๐Ÿ” Observability: MLflow Integration

Automatic Tracing

TinyLoop automatically integrates with MLflow for tracing:

import mlflow
from tinyloop.utils.mlflow import mlflow_trace

class Agent:
    @mlflow_trace(mlflow.entities.SpanType.AGENT)
    def __call__(self, prompt: str, **kwargs):
        self.llm.add_message(self.llm._prepare_user_message(prompt))
        for _ in range(self.max_iterations):
            response = self.llm(
                messages=self.llm.get_history(), tools=self.tools, **kwargs
            )
            if response.tool_calls:
                should_finish = False
                for tool_call in response.tool_calls:
                    tool_response = self.tools_map[tool_call.function_name](
                        **tool_call.args
                    )

                    self.llm.add_message(
                        self._format_tool_response(tool_call, str(tool_response))
                    )

                    if tool_call.function_name == "finish":
                        should_finish = True
                        break

                if should_finish:
                    break

        return self.llm(
            messages=self.llm.get_history(),
            response_format=self.output_format,
        )
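Conceptually, `mlflow_trace` wraps the decorated call in a named span and records what happened so MLflow can display the trace. A dependency-free sketch of the decorator pattern (the in-memory `SPANS` list stands in for MLflow's tracking backend):

```python
import functools
import time

SPANS = []  # collected spans; MLflow would ship these to a tracking server

def trace(span_type: str):
    """Record a span for every call of the decorated function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Span is recorded even if the call raises
                SPANS.append({"name": fn.__name__, "type": span_type,
                              "duration_s": time.perf_counter() - start})
        return wrapper
    return decorator

@trace("AGENT")
def answer(prompt: str) -> str:
    return prompt.upper()

print(answer("hi"))  # → HI
```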


๐Ÿ—๏ธ Project Structure

tinyloop/
├── features/
│   ├── function_calling.py  # Function calling utilities
│   └── vision.py            # Vision model support
├── inference/
│   ├── base.py              # Base inference classes
│   └── litellm.py           # LiteLLM integration
├── modules/
│   ├── base_loop.py         # Base loop implementation
│   ├── generate.py          # Generation modules
│   └── tool_loop.py         # Tool execution loop
└── utils/
    └── mlflow.py            # MLflow utilities

🧪 Development

Running Tests

# Run all tests
pytest tests/

# Run specific test file
pytest tests/test_function_calling.py -v

# Run with coverage
pytest tests/ --cov=tinyloop

Examples

Check out the Jupyter notebooks in the repository for more detailed examples.

๐Ÿค Contributing

We welcome contributions! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Made with โค๏ธ for the AI community



Download files

Download the file for your platform.

Source Distribution

tinyloop-0.1.22.tar.gz (546.6 kB)

Built Distribution


tinyloop-0.1.22-py3-none-any.whl (18.2 kB)

File details

Details for the file tinyloop-0.1.22.tar.gz.

File metadata

  • Download URL: tinyloop-0.1.22.tar.gz
  • Upload date:
  • Size: 546.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.5.12

File hashes

Hashes for tinyloop-0.1.22.tar.gz:

  • SHA256: 62d71e26c8cf02db6a5bf9f133e1448db3410c4feb6de10547c1f578175a76fc
  • MD5: 9e98a0404c707d498b2d25733f988d2d
  • BLAKE2b-256: 8d519f241f40c5d46eeae3b9952172ce5e3a10d8ccd9e751d2fbc333e54d2aee

File details

Details for the file tinyloop-0.1.22-py3-none-any.whl.

File metadata

  • Download URL: tinyloop-0.1.22-py3-none-any.whl
  • Upload date:
  • Size: 18.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.5.12

File hashes

Hashes for tinyloop-0.1.22-py3-none-any.whl:

  • SHA256: dbfb0d2bc7b29e3e87229d5406f91fe070fc5c4e87b34b2b78de55987ea65b62
  • MD5: 6a303add0be2ae9840ba0d2517013705
  • BLAKE2b-256: 0b11e85e7184a55235081c056991e1ac6016118d2671cdf60f099c6df1375f35
