Python agent for inference handling
org.slashlib.py.agent
A highly decoupled, asynchronous framework for building AI agents in Python.
Core Concept
This package provides a robust infrastructure to connect AI models (Inference Engines) with functional tools. The focus lies on Provider Agnosticism: The agent does not need to know whether it is communicating with Ollama, OpenAI, or a local model—it uses standardized adapters to ensure seamless integration.
Key Features
- Asynchronous Core: Built on `asyncio` for non-blocking task execution.
- Provider Agnostic: Easily swap the AI engine using the Adapter pattern.
- Automatic Tool Schemas: Automatically transforms Python functions into JSON schemas for LLMs via decorators.
- Multiton Pattern: Ensures unique agent instances by identifier, preventing redundant resource allocation.
- Robust Exception Hierarchy: Clearly separates connection, configuration, and tool execution errors.
- Plugin System: Discover and load inference adapters dynamically via Python entry points.
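The multiton behavior listed above can be illustrated with a minimal, framework-independent sketch; the real `Agent` implementation may differ in detail, but the idea is one instance per identifier:

```python
class MultitonAgent:
    """Minimal multiton sketch: at most one instance per identifier."""
    _instances = {}

    def __new__(cls, identifier: str):
        # Reuse the existing instance for this identifier, if any.
        if identifier not in cls._instances:
            instance = super().__new__(cls)
            instance.identifier = identifier
            cls._instances[identifier] = instance
        return cls._instances[identifier]

a = MultitonAgent("MathExpert")
b = MultitonAgent("MathExpert")
c = MultitonAgent("CodeExpert")
print(a is b)  # True: same identifier yields the same instance
print(a is c)  # False: distinct identifiers yield distinct instances
```

Requesting an agent twice with the same identifier therefore cannot allocate resources twice.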
Installation
Install the package via pip:
```shell
pip install org.slashlib.py.agent
```
Configuration
The framework can automatically ingest default settings from a pyproject.json file located in your project root. This allows you to manage model parameters without changing your code.
pyproject.json:
```json
{
    "name": "My App",
    "version": "0.0.1",
    "paths": {
        "ROOT": "{ROOT}",
        "assets": "{ROOT}/assets"
    },
    "assets": {
        "logging": "{path.assets}/logging.json"
    },
    "plugins": {
        "my-inference-adapter": {
            "default-setting": "foo"
        }
    }
}
```
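The `{ROOT}`-style placeholders above suggest a simple string-template scheme. As an illustration only (the framework's actual resolution logic is not documented here), a naive resolver could substitute the project root into each path:

```python
# Illustrative only: a naive resolver for "{ROOT}"-style placeholders,
# NOT the framework's actual implementation.
def resolve_paths(paths: dict, root: str) -> dict:
    """Replace the {ROOT} placeholder in every path value."""
    return {key: value.replace("{ROOT}", root) for key, value in paths.items()}

paths = {"ROOT": "{ROOT}", "assets": "{ROOT}/assets"}
resolved = resolve_paths(paths, "/home/user/myapp")
print(resolved["assets"])  # /home/user/myapp/assets
```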
Quick Start
Setting up an agent with a tool and the Ollama adapter is straightforward:
```python
import asyncio

from org.slashlib.py.inference.ollama import OllamaInferenceAdapter
from org.slashlib.py.agent import Agent, tool

# 1. Define a tool
@tool(description="Adds two numbers.")
async def add_numbers(a: int, b: int) -> int:
    return a + b

async def main():
    # 2. Configure Adapter and Agent
    adapter = OllamaInferenceAdapter()
    my_agent = Agent(
        identifier="MathExpert",
        tools=[add_numbers],
        adapter=adapter
    )

    # 3. Start task and retrieve result
    response = await my_agent.run(user_prompt="What is 123 + 456?")
    print(f"Response: {response.get_last_context()}")

if __name__ == "__main__":
    asyncio.run(main())
```
The @tool Decorator
The @tool decorator is the bridge between standard Python functions and AI logic. It transforms a function into an instance of the Tool class. This class automatically generates the metadata and JSON schemas required by Large Language Models (LLMs) like Gemma or Llama to understand how to interact with your code.
Key Features
- Schema Generation: Automatically extracts the tool name, description (from docstrings), and parameter types (from type hints).
- Execution Wrapper: Handles both synchronous and asynchronous functions, ensuring results are formatted as strings or JSON suitable for LLM context.
- Type Mapping: Maps Python types (int, str, list, etc.) to their corresponding JSON schema types.
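To make the schema-generation step concrete, here is a minimal, framework-independent sketch of how type hints and a docstring can be turned into an LLM-facing tool schema. The `TYPE_MAP` table and the output structure are assumptions for illustration; the actual `Tool` class may produce a different shape:

```python
import inspect
from typing import get_type_hints

# Assumed mapping from Python types to JSON schema types; the
# framework's actual table may cover more types.
TYPE_MAP = {int: "integer", str: "string", float: "number",
            bool: "boolean", list: "array", dict: "object"}

def build_schema(func):
    """Sketch: derive a tool schema from signature, hints, and docstring."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # the return type is not a parameter
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            "properties": {name: {"type": TYPE_MAP.get(tp, "string")}
                           for name, tp in hints.items()},
            "required": list(hints),
        },
    }

def add_numbers(a: int, b: int) -> int:
    """Adds two integers together and returns the sum."""
    return a + b

schema = build_schema(add_numbers)
print(schema["parameters"]["properties"])
# {'a': {'type': 'integer'}, 'b': {'type': 'integer'}}
```

This also shows why type hints and docstrings are mandatory: without them there is nothing to put into `properties` or `description`.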
Critical Usage Rules
- Always Use Parentheses: The decorator must be called with parentheses: `@tool()`. This ensures the function is wrapped into a `Tool` instance rather than remaining a raw function.
- Type Hints are Mandatory: Use Python type hints (e.g., `a: int`, `names: list`). These are used to build the "parameters" section of the JSON schema.
- Docstrings Matter: The function's docstring is used as the tool's description. Be precise, as this is the "manual" the AI reads to decide when to use the tool.
Example: A Mathematical Tool
```python
from org.slashlib.py.agent import tool

@tool()
def add_numbers(a: int, b: int) -> int:
    """
    Adds two integers together and returns the sum.

    Args:
        a (int): The first number.
        b (int): The second number.

    Returns:
        int: The sum of a and b.
    """
    return a + b

# The function 'add_numbers' is now a Tool object and can be
# passed directly to an Agent:
# my_agent = Agent(..., tools=[add_numbers])
```
Custom Configuration
You can explicitly override the tool's name or description within the decorator if the function name or docstring isn't descriptive enough for the AI:
```python
@tool(name="global_calculator", description="Use this for any addition tasks.")
def add(a: int, b: int):
    return a + b
```
Async Execution
The framework is built on Python's asyncio. The Agent.run method is an asynchronous coroutine. This allows your application to remain responsive or handle multiple agents concurrently.
Correct Async Flow
```python
import asyncio

from org.slashlib.py.agent import Agent

async def main():
    # 1. Initialize Agent (via plugin or direct)
    my_agent = Agent.from_plugin(
        identifier="MathExpert",
        tools=[add_numbers],
        plugin_name="ollama-inference-adapter"
    )

    # 2. Execute run (awaits the completion of the inference cycle)
    # The method returns an AgentResponse object directly.
    response = await my_agent.run(
        user_prompt="What is 123 + 456?",
        model="gemma4"
    )

    # 3. Handle the result
    if not response.has_error:
        print(f"Response: {response.response}")
    else:
        response.raise_errors()

if __name__ == "__main__":
    asyncio.run(main())
```
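Because `Agent.run` is a coroutine, several agents can be driven concurrently rather than one after another. The sketch below uses stub coroutines in place of real agents so the pattern is runnable on its own; with the real framework, each stub would be an `await my_agent.run(...)` call:

```python
import asyncio

# Stub standing in for Agent.run; a real call would perform inference.
async def run_agent(identifier: str, prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O-bound inference
    return f"{identifier} answered: {prompt}"

async def main() -> list:
    # asyncio.gather drives both "agents" concurrently and preserves
    # the order of results.
    return list(await asyncio.gather(
        run_agent("MathExpert", "What is 123 + 456?"),
        run_agent("CodeExpert", "Write a sort function."),
    ))

results = asyncio.run(main())
print(results[0])  # MathExpert answered: What is 123 + 456?
```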
Understanding AgentResponse
The Agent.run() method does not return a simple string, but an AgentResponse object. This container manages the execution results, the conversation history (context), and any errors that may have occurred.
Key Properties & Methods
- `.response`: Returns the final text response from the assistant. Use this to get the AI's answer.
- `.has_error`: A boolean flag indicating if a fatal process error (e.g., connection issues, missing models) occurred.
- `.raise_errors()`: A helper method that raises the first recorded process error as an exception. Useful for debugging failed runs.
- `.context_size`: Returns the number of messages exchanged in the current run.
- `.get_context()`: Returns the full conversation history (messages, tool calls, and results).
Example: Handling the Response
```python
response = await my_agent.run(user_prompt="Calculate 123 + 456", model="gemma4")

if response.has_error:
    print("An error occurred!")
    response.raise_errors()
else:
    print(f"Assistant says: {response.response}")
    print(f"Conversation steps: {response.context_size}")
```
Process Errors vs. Tool Errors
The AgentResponse distinguishes between two types of issues to give you fine-grained control over error handling:
- Process Errors (`.has_error` / `._errors`): These are fatal errors that prevented the Agent from completing its task (e.g., connection loss to Ollama, model not found, or authentication issues).
- Tool Errors (`.has_tool_error` / `.get_tool_errors()`): These are non-fatal errors that occurred inside a specific tool execution. The Agent might still be able to provide a final response even if one or more tool calls failed.
Pro-Tip: If you want your application to be extra robust, always check both:
```python
response = await my_agent.run(...)

if response.has_error:
    # Fatal: The agent couldn't finish
    response.raise_errors()

if response.has_tool_error:
    # Non-fatal: One or more tools failed, but the agent still replied
    for err in response.get_tool_errors():
        print(f"Warning: Tool execution failed: {err}")
```
Plugin Discovery & Dynamic Loading
The framework utilizes Python's entry points to support dynamic discovery and loading of inference adapters. This allows you to extend the agent's capabilities with external adapters without modifying the core package.
Using Agent.from_plugin
To use an external adapter, ensure the corresponding plugin package is installed in your environment. You can then instantiate an agent using the discovery interface:
```python
from org.slashlib.py.agent import Agent

# 1. List all discovered inference adapter plugins
# This scans the environment for registered
# 'org.slashlib.py.inference.adapter' entry points.
available = Agent.list_plugins()
print(f"Available adapters: {available}")

# 2. Create an agent using a specific plugin
# In this example, we load the Ollama adapter dynamically.
my_agent = Agent.from_plugin(
    identifier="my-dynamic-agent",
    tools=[my_tool],  # Must be a list of @tool() objects
    plugin_name="ollama-inference-adapter",  # The name registered in entry_points
    adapter_kwargs={  # Arguments passed directly to the Adapter __init__
        "base_url": "http://localhost:11434"
    },
    multi=True  # Optional: Agent-specific keyword arguments
)
```
Why use Discovery?
- Decoupling: The core agent logic remains independent of specific LLM providers (Ollama, OpenAI, etc.).
- Extensibility: Simply install a new adapter package, and it becomes immediately available via `list_plugins()`.
- Flexible Config: `adapter_kwargs` allows for provider-specific configuration (like API keys or base URLs) while keeping the Agent initialization clean.
Documentation & Obsidian
The project root is pre-configured as an Obsidian Vault. If you open this folder directly in Obsidian, all settings and documentation links will be available immediately via the included .obsidian directory.
The following community plugins are pre-configured in the vault to enhance the documentation experience:
- File Include: Embed code files directly into your markdown documentation.
- Folder Notes: Add descriptions at the folder level.
- Front Matter Title: Use metadata for descriptive file titles.
- Hide Folders: Keeps the structure clean by hiding internal directories.
- Iconic & Icons: Improved visual navigation.
License
This project is licensed under the MIT License - see the LICENSE file for details.
© 2026 org.slashlib
File details
Details for the file org_slashlib_py_agent-0.1.6.tar.gz.
File metadata
- Download URL: org_slashlib_py_agent-0.1.6.tar.gz
- Upload date:
- Size: 21.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c296b8a3a9a65728a5a482c12317b03d158ebf6bb54b9002335dc3dc52738857 |
| MD5 | 226184b0720fb8bdb2de5e38a8a6f3eb |
| BLAKE2b-256 | 7a80fa1596c784216d0897b9948c67db846a0b205092094b678e8e7a66f2de0d |
File details
Details for the file org_slashlib_py_agent-0.1.6-py3-none-any.whl.
File metadata
- Download URL: org_slashlib_py_agent-0.1.6-py3-none-any.whl
- Upload date:
- Size: 21.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4d328d40589eae388b030ae78a88f87ad744867dc1d9cdbf714f45a782ce0ff4 |
| MD5 | b4ba93f30089ab1dca244beaac6ac15f |
| BLAKE2b-256 | 12eaabe99e0f512b45a0bc2f73e8be6ed0d15a42059a6835c2b58368c92ceb75 |