Tool calling runtime for text-only LLMs with LangChain support
LLM Tool Runtime
A lightweight, model-agnostic tool calling runtime for text-only LLMs. Works with any language model through LangChain or custom callables.
Features
- Simple Tool Registration - Use the `@runtime.tool` decorator to register any Python function
- Automatic Retry Loop - Handles tool call failures gracefully with configurable retries
- Model Agnostic - Works with OpenAI, Anthropic, Google, Ollama, and any custom LLM
- Safe Parsing - Robust JSON extraction from LLM outputs
- Type Conversion - Automatic argument type conversion based on function signatures
- Fully Testable - Mock LLMs included for testing without API calls
- Zero Dependencies - The core package has no required dependencies
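To illustrate the type-conversion idea, here is a minimal sketch of how string arguments can be coerced to the types declared in a function's signature using `inspect` (this is an assumption about the general technique, not the package's internal code; `coerce_args` is a hypothetical helper name):

```python
import inspect

def coerce_args(fn, raw_args: dict) -> dict:
    """Convert raw (often string) arguments to the types in fn's signature."""
    sig = inspect.signature(fn)
    coerced = {}
    for name, value in raw_args.items():
        param = sig.parameters.get(name)
        if param is not None and param.annotation in (int, float, bool, str):
            if param.annotation is bool and isinstance(value, str):
                # "true"/"1"/"yes" -> True, everything else -> False
                coerced[name] = value.lower() in ("true", "1", "yes")
            else:
                coerced[name] = param.annotation(value)
        else:
            coerced[name] = value  # no usable annotation: pass through
    return coerced

def add(a: int, b: int) -> int:
    return a + b

print(coerce_args(add, {"a": "15", "b": "27"}))  # {'a': 15, 'b': 27}
```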
Installation
From Source (Development)
# Clone the repository
git clone https://github.com/Anky9972/llm_tool_runtime.git
cd llm-tool-runtime
# Create virtual environment
python -m venv .venv
# Activate (Windows)
.venv\Scripts\activate
# Activate (Linux/Mac)
source .venv/bin/activate
# Install in development mode with your preferred provider
pip install -e ".[dev]" # Just dev tools (pytest)
pip install -e ".[google]" # Google Gemini/Gemma
pip install -e ".[openai]" # OpenAI GPT models
pip install -e ".[ollama]" # Ollama (local models)
pip install -e ".[all]" # All providers
From PyPI (Coming Soon)
pip install llm-tool-runtime
pip install llm-tool-runtime[google] # With Google support
Quick Start
1. Basic Usage with Google Gemini
import os
from llm_tool_runtime import ToolRuntime
from langchain_google_genai import ChatGoogleGenerativeAI

# Set your API key
os.environ["GOOGLE_API_KEY"] = "your-api-key"

# Initialize runtime with any LangChain model
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
runtime = ToolRuntime(llm)

# Register tools using the decorator
@runtime.tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

@runtime.tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 25°C"

# Run with natural language
result = runtime.run("What is 15 + 27?")
print(result)  # "The result of 15 + 27 is 42."

result = runtime.run("What's the weather in Tokyo?")
print(result)  # "The weather in Tokyo is Sunny, 25°C."
2. With OpenAI
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
runtime = ToolRuntime(llm, verbose=True)

@runtime.tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

runtime.run("Search for Python tutorials")
3. With Ollama (Local, Free!)
from langchain_ollama import ChatOllama

# No API key needed - runs locally
llm = ChatOllama(model="llama3.2")
runtime = ToolRuntime(llm)

@runtime.tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Caution: eval() executes arbitrary code; sandbox or restrict it
    # before exposing this tool to untrusted input
    return str(eval(expression))

runtime.run("Calculate 2 ** 10")
4. With Any Custom LLM
import requests
from llm_tool_runtime import ToolRuntime

def my_llm(system_prompt: str, user_prompt: str) -> str:
    """Custom LLM that calls any API."""
    response = requests.post("https://your-api.com/chat", json={
        "system": system_prompt,
        "user": user_prompt,
    })
    return response.json()["text"]

runtime = ToolRuntime(my_llm)

@runtime.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

runtime.run("Say hello to Alice")
API Reference
ToolRuntime
The main class for managing tools and executing LLM interactions.
ToolRuntime(
    llm,                    # LangChain model or callable(system, user) -> str
    max_retries: int = 3,   # Max tool call retry attempts
    verbose: bool = False,  # Print debug information
)
Methods:

| Method | Description |
|---|---|
| `tool(fn)` | Decorator to register a function as a tool |
| `run(prompt)` | Execute the tool calling loop |
| `run_with_history(prompt, history)` | Run with conversation context |
@runtime.tool Decorator
# Simple registration
@runtime.tool
def my_tool(arg: str) -> str:
    """Tool description (used in prompt)."""
    return "result"

# With custom description
@runtime.tool(description="Custom description for the LLM")
def another_tool(x: int, y: int) -> int:
    return x + y
Conversation History
history = []
response, history = runtime.run_with_history("What's 5 + 3?", history)
# history = [("What's 5 + 3?", "The result is 8.")]
response, history = runtime.run_with_history("Multiply that by 2", history)
# Uses context from previous exchange
Multi-Step Chaining (Agents)
Turn your LLM into an autonomous agent that can "think" through problems. The runtime supports multi-step execution loops (ReAct pattern), allowing the model to call tools, see the results, and then decide the next action.
How to Enable
Simply set max_steps when initializing the runtime (default is 5).
# Allow up to 10 sequential tool calls
runtime = ToolRuntime(llm, max_steps=10)
Example: Researcher Agent
The model needs to find a ticker symbol first, then use it to check the price.
@runtime.tool
def find_ticker(company: str) -> str:
    """Finds the stock symbol for a company."""
    if "apple" in company.lower():
        return "AAPL"
    return "UNKNOWN"

@runtime.tool
def get_price(ticker: str) -> float:
    """Gets the current price for a ticker."""
    if ticker == "AAPL":
        return 185.50
    return 0.0

# User asks a complex question
# Runtime automatically does: find_ticker("apple") -> "AAPL" -> get_price("AAPL") -> 185.50
answer = runtime.run_safe("How much is Apple's stock?")
print(answer)  # "Apple's stock is currently $185.50."
How It Works

User: "What's 15 + 27?"
        ↓
┌─────────────────────────────────────────────────┐
│ 1. Build system prompt with tool definitions    │
│ 2. Send to LLM                                  │
│ 3. LLM responds:                                │
│      <tool_call>                                │
│      {"name": "add", "arguments": {"a": 15...}} │
│      </tool_call>                               │
│ 4. Parse tool call from response                │
│ 5. Execute: add(15, 27) → 42                    │
│ 6. Send result back to LLM                      │
│ 7. LLM provides final answer                    │
└─────────────────────────────────────────────────┘
        ↓
Response: "The sum of 15 and 27 is 42."
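The loop above can be sketched in plain Python. This is a simplified illustration of the pattern, not the package's internal implementation; `run_loop` and `fake_llm` are hypothetical names, and the real runtime adds retries and safer parsing:

```python
import json

def run_loop(llm, tools: dict, prompt: str, max_steps: int = 5) -> str:
    """Ask the LLM, execute any requested tool, feed the result back, repeat."""
    system = "You may call tools with <tool_call>{...}</tool_call>. Tools: " + ", ".join(tools)
    user = prompt
    reply = ""
    for _ in range(max_steps):
        reply = llm(system, user)
        if "<tool_call>" not in reply:
            return reply  # no tool requested: this is the final answer
        payload = reply.split("<tool_call>", 1)[1].split("</tool_call>", 1)[0]
        call = json.loads(payload)
        result = tools[call["name"]](**call["arguments"])
        user = f"{prompt}\nTool {call['name']} returned: {result}"
    return reply

# Demo with a scripted fake LLM (no API calls)
def fake_llm(system: str, user: str) -> str:
    if "returned" in user:
        return "The sum of 15 and 27 is 42."
    return '<tool_call>{"name": "add", "arguments": {"a": 15, "b": 27}}</tool_call>'

print(run_loop(fake_llm, {"add": lambda a, b: a + b}, "What is 15 + 27?"))
# The sum of 15 and 27 is 42.
```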
Tool Call Format
The runtime instructs LLMs to respond with:
<tool_call>
{"name": "function_name", "arguments": {"arg1": "value1"}}
</tool_call>
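A minimal parser for this tag format could look like the following sketch (a regex plus `json.loads`; the package's actual parser is presumably more robust, and `parse_tool_call` is a hypothetical name):

```python
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_call(text: str):
    """Extract the first <tool_call> JSON payload, or None if absent/invalid."""
    match = TOOL_CALL_RE.search(text)
    if not match:
        return None
    try:
        call = json.loads(match.group(1))
    except json.JSONDecodeError:
        return None
    # require the expected shape: a name plus a dict of arguments
    if "name" in call and isinstance(call.get("arguments"), dict):
        return call
    return None

reply = '<tool_call>\n{"name": "add", "arguments": {"a": 15, "b": 27}}\n</tool_call>'
print(parse_tool_call(reply))  # {'name': 'add', 'arguments': {'a': 15, 'b': 27}}
```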
Error Handling
The runtime includes a robust error handling system to ensure your application stays stable.
Safe Execution (Recommended for Production)
Use run_safe() to handle errors gracefully without crashing your app. It catches API connection issues, rate limits, and authentication errors automatically.
# Returns a friendly string instead of raising an exception
response = runtime.run_safe("What is 25 + 17?")

# You can customize the default error message
response = runtime.run_safe(
    "Complex query...",
    default="I apologize, but I'm having trouble connecting right now.",
)
Catching Specific Errors
For more control, you can catch specific exceptions:
from llm_tool_runtime import (
    ToolRuntime,
    InvalidAPIKeyError,
    RateLimitError,
    LLMConnectionError,
    MaxRetriesExceededError,
)

try:
    result = runtime.run("My prompt")
except InvalidAPIKeyError:
    print("Please check your API key")
except RateLimitError:
    print("System is busy, please try again later")
except LLMConnectionError as e:
    print(f"Connection failed: {e}")
except MaxRetriesExceededError:
    print("Failed to get a valid response after multiple attempts")
Supported Models
Works with any LLM that can follow instructions. Tested with:
| Provider | Models | Package |
|---|---|---|
| Google | Gemini 1.5/2.0, Gemma 3 | langchain-google-genai |
| OpenAI | GPT-4o, GPT-4, o1 | langchain-openai |
| Anthropic | Claude 3.5 Sonnet/Opus | langchain-anthropic |
| Ollama | Llama 3, Mistral, Qwen | langchain-ollama |
| Groq | Llama, Mixtral (fast!) | langchain-groq |
| DeepSeek | DeepSeek Chat/Coder | langchain-openai (custom base_url) |
| Together AI | Open source models | langchain-together |
| AWS Bedrock | Claude, Titan, Llama | langchain-aws |
Testing
Run All Tests
# Activate virtual environment
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
# Run tests
pytest -v
# With coverage
pytest --cov=llm_tool_runtime --cov-report=html
Testing Without Real LLM
The package includes mock LLMs for testing:
from llm_tool_runtime import ToolRuntime
from tests.mock_llm import StatefulMockLLM

def test_my_tool():
    mock = StatefulMockLLM()
    runtime = ToolRuntime(mock)

    @runtime.tool
    def add(a: int, b: int) -> int:
        return a + b

    result = runtime.run("Add 2 and 3")
    assert "5" in result
Project Structure

llm_tool_runtime/
├── llm_tool_runtime/        # Main package
│   ├── __init__.py          # Package exports
│   ├── runtime.py           # Core ToolRuntime class
│   ├── registry.py          # Tool registration and management
│   ├── prompt.py            # System prompt builder
│   ├── parser.py            # Tool call JSON parser
│   ├── errors.py            # Custom exceptions
│   └── types.py             # Type definitions
│
├── tests/                   # Test suite
│   ├── __init__.py
│   ├── mock_llm.py          # Mock LLMs for testing
│   ├── test_add_tool.py     # Runtime tests
│   ├── test_parser.py       # Parser tests
│   └── test_registry.py     # Registry tests
│
├── .env.example             # Environment template
├── .gitignore               # Git ignore rules
├── example.py               # Working example script
├── pyproject.toml           # Package configuration
├── README.md                # This file
└── LICENSE                  # MIT License
Environment Variables
Create a .env file (never commit this!):
# Google Gemini
GOOGLE_API_KEY=your-google-api-key
# OpenAI
OPENAI_API_KEY=your-openai-api-key
# Anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key
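In practice these variables are usually loaded at startup, e.g. with the python-dotenv package's `load_dotenv()`. A dependency-free sketch that parses a simple KEY=value file might look like this (`load_env_file` is a hypothetical helper, not part of this package):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Populate os.environ from KEY=value lines, skipping comments and blanks."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence over the file
            os.environ.setdefault(key.strip(), value.strip())

load_env_file()  # then ToolRuntime providers can read their keys from os.environ
```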
Contributing

- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes
- Run tests: pytest -v
- Commit: git commit -m 'Add amazing feature'
- Push: git push origin feature/amazing-feature
- Open a Pull Request
Development Setup
# Clone your fork
git clone https://github.com/Anky9972/llm_tool_runtime.git
cd llm-tool-runtime
# Create virtual environment
python -m venv .venv
.venv\Scripts\activate
# Install dev dependencies
pip install -e ".[dev,all]"
# Run tests
pytest -v
License
MIT License - see LICENSE file.
Acknowledgments
- Built with LangChain for model integrations
- Inspired by OpenAI's function calling and Anthropic's tool use
Support

- Bug Reports: Open an issue
- Feature Requests: Open an issue
- Contact: ankygaur9972@gmail.com
Made with ❤️ for the AI community