# pygentix
A composable Python framework for building AI agents with tool-calling, structured output, and SQLAlchemy integration — across any LLM provider.
```shell
pip install pygentix           # core only
pip install pygentix[ollama]   # + Ollama backend
pip install pygentix[openai]   # + OpenAI (ChatGPT) backend
pip install pygentix[gemini]   # + Google Gemini backend
pip install pygentix[all]      # every backend
```

> Azure OpenAI / Copilot uses the `openai` package — install `pygentix[openai]`.
## Quick Start

Pick a backend, register tools, and start a conversation:

```python
from pygentix import Ollama

agent = Ollama(model="qwen2.5:7b")  # runs locally — no API key needed

@agent.uses
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny, 22 °C in {city}"

conv = agent.start_conversation()
response = conv.ask("What's the weather in Paris?")
print(response.message.content)
# → "It's sunny and 22 °C in Paris right now."
```
Every backend returns the same ChatResponse object, so switching providers is a one-line change:
```python
from pygentix import ChatGPT, Gemini, Copilot

agent = ChatGPT(model="gpt-4o-mini")      # OpenAI
agent = Gemini(model="gemini-2.5-flash")  # Google
agent = Copilot(model="gpt-4o")           # Azure OpenAI
```
## Backends

| Class | Provider | Default model | Install extra |
|---|---|---|---|
| `Ollama` | Ollama (local) | `qwen2.5:7b` | `ollama` |
| `ChatGPT` | OpenAI | `gpt-4o-mini` | `openai` |
| `Gemini` | Google AI | `gemini-2.5-flash` | `gemini` |
| `Copilot` | Azure OpenAI | `gpt-4o` | `openai` |
### API keys

Cloud backends read their key from the environment (or accept it in the constructor). Ollama runs locally and needs no key.

| Backend | Environment variable | Constructor kwarg |
|---|---|---|
| `Ollama` | (none — runs locally) | — |
| `ChatGPT` | `OPENAI_API_KEY` | `api_key` |
| `Gemini` | `GEMINI_API_KEY` | `api_key` |
| `Copilot` | `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_ENDPOINT` | `api_key`, `endpoint` |

```python
from pygentix import ChatGPT

agent = ChatGPT(api_key="sk-...")  # explicit
agent = ChatGPT()                  # reads OPENAI_API_KEY
```
## Tool Calling

Decorate any Python function with `@agent.uses` to expose it as a tool the LLM can invoke:

```python
from pygentix import Ollama

agent = Ollama()

@agent.uses
def search_docs(query: str) -> str:
    """Search the documentation for relevant articles."""
    return run_search(query)

@agent.uses
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified address."""
    return mailer.send(to, subject, body)

conv = agent.start_conversation()
response = conv.ask("Find docs about authentication and email them to alice@co.com")
```

The framework introspects the function's signature and docstring to build the tool definition automatically. When the model decides to call a tool, the framework executes it and feeds the result back — looping until the model produces a final answer.
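That introspection step can be sketched with the standard library's `inspect` and `typing` modules. This is a simplified illustration of the idea, not pygentix's actual implementation:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON Schema types (simplified subset).
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_tool_definition(fn) -> dict:
    """Build a JSON-Schema-style tool definition from a function's
    signature and docstring, in the shape most LLM tool APIs expect."""
    hints = get_type_hints(fn)
    properties = {}
    required = []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the model must supply it
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    ...

tool = build_tool_definition(get_weather)
```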
## Vision / Image Understanding

Pass images alongside your question to any vision-capable model:

```python
from pygentix import Ollama

agent = Ollama(model="llama3.2-vision")  # local vision model

conv = agent.start_conversation()
response = conv.ask("How many cats are in this photo?", images=["photo.jpeg"])
print(response.message.content)
# → "There are 3 cats in the photo."
```
The images parameter accepts a list of file paths and works across all backends:
| Backend | Vision model examples |
|---|---|
| `Ollama` | `llama3.2-vision`, `moondream` |
| `ChatGPT` | `gpt-4o`, `gpt-4o-mini` |
| `Gemini` | `gemini-2.5-flash`, `gemini-2.5-pro` |
| `Copilot` | `gpt-4o` (via Azure) |
## Structured Output

Use `OutputAgent` to guarantee responses follow a JSON schema:

```python
from pygentix import Ollama, OutputAgent

class MyAgent(Ollama, OutputAgent):
    pass

agent = MyAgent()

@agent.output
class Answer:
    answer: str
    confidence: float = 0.0
    sources: list = []

conv = agent.start_conversation()
response = conv.ask("What is the capital of France?")

parsed = agent.parse_output(response)
print(parsed.answer)      # "Paris"
print(parsed.confidence)  # 0.95
```
The schema can also be a raw dict — pass any valid JSON Schema to agent.output({"type": "object", ...}).
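For reference, a hand-written JSON Schema roughly equivalent to the `Answer` class above might look like this (assuming annotations map to JSON Schema types in the usual way, with annotated-only fields required):

```python
# Field names and types mirror the Answer class annotations;
# fields with defaults become optional.
answer_schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number", "default": 0.0},
        "sources": {"type": "array", "default": []},
    },
    "required": ["answer"],
}
```

A dict like this could then be passed directly to `agent.output(...)`.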
## SQLAlchemy Integration

`SqlAlchemyAgent` gives the LLM read/write access to your database through auto-generated tools:

```python
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

from pygentix import Ollama, OutputAgent, SqlAlchemyAgent

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    price = Column(Float)  # prices like 9.99 need a float column

engine = create_engine("sqlite:///shop.db")
Base.metadata.create_all(engine)

class ShopAgent(Ollama, SqlAlchemyAgent, OutputAgent):
    pass

agent = ShopAgent(engine=engine)
agent.reads(Product)   # enables run_query
agent.writes(Product)  # enables run_insert, run_update, run_delete

@agent.output
class Response:
    answer: str
    data: list = []

conv = agent.start_conversation()
conv.ask("Add a product called 'Widget' priced at 9.99")
response = conv.ask("List all products under $20")

parsed = agent.parse_output(response)
for item in parsed.data:
    print(item)
```
The agent automatically generates run_query, run_insert, run_update, and run_delete tools, handles type coercion (strings → ints, dates, etc.), and serialises results back to the model.
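The kind of type coercion described above can be sketched with the standard library. This is an illustration of the technique, not pygentix's implementation:

```python
import datetime

def coerce(value: str, column_type: type):
    """Coerce a string value from the model into the Python type
    a column expects: ints, floats, booleans, and ISO dates."""
    if column_type is int:
        return int(value)
    if column_type is float:
        return float(value)
    if column_type is bool:
        return value.strip().lower() in {"true", "1", "yes"}
    if column_type is datetime.date:
        return datetime.date.fromisoformat(value)
    return value  # strings pass through unchanged
```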
## Mixing Backends

Every agent is a composable mixin — swap the backend class and everything else stays the same:

```python
from pygentix import Ollama, ChatGPT, Gemini, Copilot, SqlAlchemyAgent, OutputAgent

class LocalAgent(Ollama, SqlAlchemyAgent, OutputAgent):
    """Runs entirely on your machine via Ollama."""

class CloudAgent(ChatGPT, SqlAlchemyAgent, OutputAgent):
    """Uses OpenAI for inference."""

class GoogleAgent(Gemini, SqlAlchemyAgent, OutputAgent):
    """Uses Google Gemini for inference."""

class EnterpriseAgent(Copilot, SqlAlchemyAgent, OutputAgent):
    """Routes through your Azure OpenAI deployment."""
```
## Multi-turn Conversations

A `Conversation` maintains the full message history, so follow-up questions have context:

```python
from pygentix import Ollama, SqlAlchemyAgent

class DbAgent(Ollama, SqlAlchemyAgent):
    pass

# ... define models, engine, etc.
agent = DbAgent(engine=engine)
conv = agent.start_conversation()

conv.ask("Create a user named Alice with email alice@example.com")
conv.ask("Now create one for Bob at bob@example.com")
response = conv.ask("List all users")
```
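The history mechanism boils down to resending prior turns with each new question. A minimal sketch of the idea (not pygentix's actual `Conversation`; `complete` is a stand-in for whatever method the backend exposes):

```python
class MiniConversation:
    """Keep the full message history so each ask() call
    sends every prior turn back to the model for context."""

    def __init__(self, agent):
        self.agent = agent
        self.messages: list[dict] = []

    def ask(self, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        # The backend sees the whole history, not just the new question.
        reply = self.agent.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```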
## API Reference

### Core

| Symbol | Description |
|---|---|
| `Agent` | Abstract base class — subclass to create a backend |
| `ChatResponse` | Normalized response every backend returns |
| `Conversation` | Multi-turn conversation manager |
| `Function` | Introspectable wrapper around a tool callable |

### Backends

| Symbol | Description |
|---|---|
| `Ollama` | Local inference via Ollama |
| `ChatGPT` | OpenAI Chat Completions |
| `Gemini` | Google Gemini (via `google-genai`) |
| `Copilot` | Azure OpenAI |

### Mixins

| Symbol | Description |
|---|---|
| `OutputAgent` | JSON schema enforcement for responses |
| `SqlAlchemyAgent` | Database CRUD tools from ORM models |
## Development

```shell
git clone https://github.com/andreperussi/pygentix.git
cd pygentix
pip install -e ".[dev]"
pytest
```
## License
MIT