
pygentix

A composable Python framework for building AI agents with tool-calling, structured output, and SQLAlchemy integration — across any LLM provider.

pip install pygentix                    # core only
pip install pygentix[ollama]            # + Ollama backend
pip install pygentix[openai]            # + OpenAI (ChatGPT) backend
pip install pygentix[gemini]            # + Google Gemini backend
pip install pygentix[all]               # every backend

Azure OpenAI / Copilot uses the openai package — install pygentix[openai].


Quick Start

Pick a backend, register tools, and start a conversation:

from pygentix import Ollama

agent = Ollama(model="qwen2.5:7b")            # runs locally — no API key needed

@agent.uses
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny, 22 °C in {city}"

conv = agent.start_conversation()
response = conv.ask("What's the weather in Paris?")
print(response.message.content)
# → "It's sunny and 22 °C in Paris right now."

Every backend returns the same ChatResponse object, so switching providers is a one-line change:

from pygentix import ChatGPT, Gemini, Copilot

agent = ChatGPT(model="gpt-4o-mini")          # OpenAI
agent = Gemini(model="gemini-2.5-flash")       # Google
agent = Copilot(model="gpt-4o")               # Azure OpenAI

Backends

Class     Provider        Default model      Install extra
Ollama    Ollama (local)  qwen2.5:7b         ollama
ChatGPT   OpenAI          gpt-4o-mini        openai
Gemini    Google AI       gemini-2.5-flash   gemini
Copilot   Azure OpenAI    gpt-4o             openai

API keys

Cloud backends read their key from the environment (or accept it in the constructor). Ollama runs locally and needs no key.

Backend   Environment variable                           Constructor kwarg
Ollama    (none — runs locally)
ChatGPT   OPENAI_API_KEY                                 api_key
Gemini    GEMINI_API_KEY                                 api_key
Copilot   AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT   api_key, endpoint

from pygentix import ChatGPT

agent = ChatGPT(api_key="sk-...")              # explicit
agent = ChatGPT()                              # reads OPENAI_API_KEY
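
Copilot additionally needs the Azure endpoint — a sketch using the kwargs from the table above:

from pygentix import Copilot

agent = Copilot(api_key="...", endpoint="https://my-resource.openai.azure.com")   # explicit
agent = Copilot()   # reads AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT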

Tool Calling

Decorate any Python function with @agent.uses to expose it as a tool the LLM can invoke:

from pygentix import Ollama

agent = Ollama()

@agent.uses
def search_docs(query: str) -> str:
    """Search the documentation for relevant articles."""
    return run_search(query)

@agent.uses
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to the specified address."""
    return mailer.send(to, subject, body)

conv = agent.start_conversation()
response = conv.ask("Find docs about authentication and email them to alice@co.com")

The framework introspects the function's signature and docstring to build the tool definition automatically. When the model decides to call a tool, the framework executes it and feeds the result back — looping until the model produces a final answer.
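
For reference, the definition generated for get_weather above looks roughly like this (shown in the OpenAI-style function-calling shape; each backend translates it to its provider's wire format):

{
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
    }
}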

Parameterised @uses — serializer, description, name

@agent.uses also accepts optional keyword arguments so you can decorate an existing method (or a callable) without writing a thin wrapper around it:

from pygentix import ChatGPT

agent = ChatGPT()

@agent.uses(
    serializer=lambda rows: [r.model_dump(mode="json") for r in rows],
    name="get_open_tickets",
    description="Return every open support ticket, serialised as plain dicts.",
)
def fetch_open_tickets() -> list:
    return TicketRepo.query_open()

Kwarg         Purpose
serializer    Post-processes the return value before it reaches the LLM (e.g. ORM → JSON-safe dict).
description   Overrides func.__doc__ in the tool schema the LLM sees.
name          Overrides func.__name__ in the tool schema — handy for exposing repository methods under friendlier tool names.

All three forms are interchangeable:

@agent.uses                                        # bare decorator
@agent.uses(serializer=to_dicts, name="search")    # parameterised decorator
agent.uses(Repo.search, serializer=to_dicts)       # direct call

Vision / Image Understanding

Pass images alongside your question to any vision-capable model:

from pygentix import Ollama

agent = Ollama(model="llama3.2-vision")        # local vision model
conv = agent.start_conversation()

response = conv.ask("How many cats are in this photo?", images=["photo.jpeg"])
print(response.message.content)
# → "There are 3 cats in the photo."

The images parameter accepts a list of file paths and works across all backends:

Backend   Vision model examples
Ollama    llama3.2-vision, moondream
ChatGPT   gpt-4o, gpt-4o-mini
Gemini    gemini-2.5-flash, gemini-2.5-pro
Copilot   gpt-4o (via Azure)

PDF Document Parsing

Combine a vision model with PyMuPDF to extract structured information from PDF documents:

import fitz  # PyMuPDF — pip install pymupdf
from pygentix import Ollama, OutputAgent

class PDFAgent(Ollama, OutputAgent):
    pass

agent = PDFAgent(model="llama3.2-vision")

@agent.output
class InvoiceData:
    company: str
    invoice_number: str
    total: str
    client: str

# Render the first page to an image
doc = fitz.open("invoice.pdf")
page = doc[0]
pix = page.get_pixmap(dpi=200)
pix.save("invoice_page.png")
doc.close()

conv = agent.start_conversation()
response = conv.ask(
    "Extract the company name, invoice number, total amount, "
    "and client name from this invoice.",
    images=["invoice_page.png"],
)

parsed = agent.parse_output(response)
print(parsed.company)         # "TechCorp Solutions"
print(parsed.invoice_number)  # "INV-2026-001"
print(parsed.total)           # "$5,454.00"
print(parsed.client)          # "Acme Industries"

This pattern works for receipts, contracts, reports — any PDF you can render to an image.


Populating a Database with Generated Data

Let the LLM generate realistic data and write it directly to your database:

from sqlalchemy import Column, Integer, String, Float, Date, create_engine
from sqlalchemy.orm import declarative_base

from pygentix import Ollama, SqlAlchemyAgent

Base = declarative_base()

class Sale(Base):
    __tablename__ = "sales"
    id = Column(Integer, primary_key=True)
    product = Column(String)
    amount = Column(Float)
    date = Column(Date)

engine = create_engine("sqlite:///sales.db")
Base.metadata.create_all(engine)

class SalesAgent(Ollama, SqlAlchemyAgent):
    pass

agent = SalesAgent(engine=engine)
agent.writes(Sale)   # grants the model insert access

conv = agent.start_conversation()
conv.ask("Create 10 sales records with realistic product names, amounts between $10 and $500, and dates in 2026.")

# Verify the rows were inserted
from sqlalchemy.orm import Session
with Session(engine) as s:
    count = s.query(Sale).count()
    print(f"{count} sales created")  # 10 sales created
    for sale in s.query(Sale).all():
        print(f"  {sale.product}: ${sale.amount} on {sale.date}")

The agent introspects the ORM model's columns and generates run_insert calls automatically — no manual SQL or fixture files needed.
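
For illustration only — the exact tool signature is internal to the framework, so the argument names below are assumptions — one generated insert might look like:

# Hypothetical shape of a single generated tool call (argument names are assumptions):
# run_insert(table="sales", values={"product": "Wireless Mouse", "amount": 129.50, "date": "2026-03-14"})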


Structured Output

Use OutputAgent to guarantee responses follow a JSON schema:

from pygentix import Ollama, OutputAgent

class MyAgent(Ollama, OutputAgent):
    pass

agent = MyAgent()

@agent.output
class Answer:
    answer: str
    confidence: float = 0.0
    sources: list = []

conv = agent.start_conversation()
response = conv.ask("What is the capital of France?")

parsed = agent.parse_output(response)
print(parsed.answer)       # "Paris"
print(parsed.confidence)   # 0.95

The schema can also be a raw dict — pass any valid JSON Schema to agent.output({"type": "object", ...}).
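
For example, the Answer schema above expressed as a raw dict:

agent.output({
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
})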


SQLAlchemy Integration

SqlAlchemyAgent gives the LLM read/write access to your database through auto-generated tools:

from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

from pygentix import Ollama, OutputAgent, SqlAlchemyAgent

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    price = Column(Float)

engine = create_engine("sqlite:///shop.db")
Base.metadata.create_all(engine)

class ShopAgent(Ollama, SqlAlchemyAgent, OutputAgent):
    pass

agent = ShopAgent(engine=engine)
agent.reads(Product)                  # enables run_query
agent.writes(Product)                 # enables run_insert, run_update, run_delete

@agent.output
class Response:
    answer: str
    data: list = []

conv = agent.start_conversation()
conv.ask("Add a product called 'Widget' priced at 9.99")
response = conv.ask("List all products under $20")

parsed = agent.parse_output(response)
for item in parsed.data:
    print(item)

The agent automatically generates run_query, run_insert, run_update, and run_delete tools, handles type coercion (strings → ints, dates, etc.), and serialises results back to the model.


Row-Level Security

Ensure users can only access their own data — even when the LLM generates the queries. pygentix supports three complementary layers.

Direct Scope (automatic WHERE injection)

Map a column on the table to a key in the conversation's scope. All CRUD operations are automatically constrained:

from pygentix import Ollama, SqlAlchemyAgent

class MyAgent(Ollama, SqlAlchemyAgent):
    pass

agent = MyAgent(engine=engine)
agent.reads(User, scope={"id": "current_user"})
agent.writes(User, scope={"id": "current_user"})

# Alice's session — she can only see and modify her own row
conv = agent.start_conversation(scope={"current_user": 5})
conv.ask("Update my name to Alice Smith")
# → UPDATE users SET name='Alice Smith' WHERE id=5
# Attempts to access other users' rows are silently filtered out

Inserts auto-set the scoped column; updates and deletes inject it into the filter. If the LLM tries to target a different user, it gets a PermissionError.
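
For example, with Alice's conversation above (illustrative — only the PermissionError behaviour is what the scope rules guarantee):

conv.ask("Rename user 7 to Mallory")
# → run_update targets id=7, which conflicts with scope {"id": 5},
#   so the framework raises PermissionError instead of touching the row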

Scope Chains (multi-level relationships)

When ownership is inferred through foreign keys (e.g. User → Sale → SaleItem), declare the chain:

agent.reads(SaleItem, scope_chain=[
    ("sale_id", Sale),              # SaleItem.sale_id → JOIN Sale
    ("user_id", "current_user"),    # WHERE Sale.user_id = scope["current_user"]
])
agent.writes(SaleItem, scope_chain=[
    ("sale_id", Sale),
    ("user_id", "current_user"),
])

conv = agent.start_conversation(scope={"current_user": 5})
conv.ask("List my sale items")
# → SELECT ... FROM sale_items JOIN sales ON ... WHERE sales.user_id = 5

Chains can be arbitrarily deep — each tuple is (fk_column, TargetModel) except the last, which is (scope_column, scope_key):

agent.reads(LineDetail, scope_chain=[
    ("item_id", SaleItem),          # JOIN SaleItem
    ("sale_id", Sale),              # JOIN Sale
    ("user_id", "current_user"),    # WHERE Sale.user_id = ...
])

Policy Callbacks (general-purpose gate)

For authorization logic beyond SQL — API calls, custom rules, role checks — register a policy callback. It runs before every tool execution on any agent type:

def my_policy(tool_name: str, arguments: dict, scope: dict) -> bool:
    """Return False to block the tool call."""
    if tool_name == "run_delete" and scope.get("role") != "admin":
        return False  # only admins can delete
    return True

conv = agent.start_conversation(
    scope={"current_user": 5, "role": "viewer"},
    policy=my_policy,
)
conv.ask("Delete all records")
# → LLM receives "Permission denied: run_delete blocked by policy"

Combining Scope and Policy

When both are defined, both run. The policy gate executes first — if it denies, the tool never reaches the database. If it allows, the scope filters are applied as usual:

conv = agent.start_conversation(
    scope={"current_user": 5, "role": "editor"},
    policy=my_policy,       # checked first
)
# 1. Policy: is this tool allowed for this role? ✓
# 2. Scope:  constrain query to current_user's data ✓

Scope-Aware Tools (works on any @uses)

SqlAlchemyAgent isn't the only thing that understands scope. Every tool registered with @agent.uses participates too: any parameter whose name matches a key in the active conversation scope is auto-filled at call time and hidden from the LLM-visible tool schema. The model literally cannot supply those arguments, so it cannot widen access by crafting different inputs.

from pygentix import ChatGPT

agent = ChatGPT()

@agent.uses(serializer=dump_list)
def get_my_opportunities(enterprise_id: str) -> list:
    """Return every opportunity in the logged-in user's enterprise."""
    return Opportunities.query_by_enterprise(enterprise_id)

conv = agent.start_conversation(scope={
    "user_id": "u_42",
    "enterprise_id": "ent_7",
    "role": "sales",
})
conv.ask("Show me my pipeline")
# Framework calls get_my_opportunities(enterprise_id="ent_7").
# The LLM sees get_my_opportunities() as a zero-argument tool.

Parameters not in scope remain visible to the LLM as normal arguments, so you can mix LLM-supplied and scope-supplied parameters freely:

@agent.uses
def search_my_tickets(enterprise_id: str, keyword: str) -> list:
    """Search tickets in the caller's enterprise by keyword."""
    return TicketRepo.search(enterprise_id, keyword)

# LLM sees: search_my_tickets(keyword: str)
# Framework injects enterprise_id from scope at call time.

This is an adapter / security boundary — the scope dict built from the authenticated session is the only place scoped values ever come from.

No scope = unrestricted (backward compatible)

If you don't pass scope or policy, everything works exactly as before — no filters, no restrictions.


Named Agents & Lazy Tool Registration

Large codebases often want to declare a tool next to the function it exposes (e.g. on a repository / model class) without centralising every registration in a single bootstrap module. Two library pieces make that safe regardless of import order:

  1. Construct the agent with a name=. The instance registers itself in Agent.registry under that name.
  2. Reference it from anywhere via Agent.by_name(...), which returns a lazy AgentRef. Its .uses(...) is the same decorator as Agent.uses — if the named agent already exists the registration runs immediately, otherwise it's queued and flushed as soon as an agent with that name is constructed.

# app/core/agent.py — built during startup
from pygentix import ChatGPT

agent = ChatGPT(name="CRMAgent", api_key=...)

# app/models/opportunity.py — declares its scoped tool in place
from pygentix import Agent

class Opportunities(Base):
    ...

    @staticmethod
    @Agent.by_name("CRMAgent").uses(
        serializer=dump_list,
        name="get_my_opportunities",
        description="Return every opportunity in the caller's enterprise.",
    )
    def get_all_by_enterprise_id(enterprise_id: str) -> list:
        ...

  • Either module can be imported first. If models load before the agent is constructed, their registrations sit on Agent.pending_uses["CRMAgent"] and flush on construction.
  • Agent.by_name(...) accepts exactly the same serializer / description / name kwargs as @agent.uses, because AgentRef.uses is Agent.uses (shared via attribute aliasing — not a reimplementation).
  • Names must be unique: constructing two agents with the same name= raises ValueError.

When combined with scope-aware tools (see above), this pattern lets each model own its scoped tools while the endpoint only wires up authentication and starts the conversation:

# app/api/chat.py
from pygentix import Agent
from app.core.agent import agent   # makes sure CRMAgent exists
import app.models                  # triggers all @Agent.by_name("CRMAgent").uses decorators

@router.post("/chat")
def chat(..., current_user = Depends(verify_token)):
    scope = {"user_id": current_user.id, "enterprise_id": current_user.enterprise_id}
    conv = agent.start_conversation(scope=scope)
    ...

Task Scheduling

SchedulerAgent lets the LLM schedule tool calls and conversations for future execution — using natural language like "send this email tomorrow" or "check sales every Monday at 9am".

pip install pygentix[scheduler]   # adds croniter for cron expressions

Basic setup

from pygentix import Ollama, SchedulerAgent

class MyAgent(Ollama, SchedulerAgent):
    pass

agent = MyAgent(schedule_file="tasks.json", poll_interval=10)

@agent.uses
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

agent.start_scheduler()

conv = agent.start_conversation()
conv.ask("Send an email to alice@co.com saying hello tomorrow at 9am")
# LLM calls schedule_task("send_email", {"to": "alice@co.com", ...}, run_at="2026-04-04T09:00:00")

The scheduler auto-registers four LLM tools: schedule_task, schedule_conversation, list_scheduled_tasks, and cancel_scheduled_task. No manual tool wiring needed.
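
The listing and cancellation tools can be exercised conversationally as well — a sketch (exactly which arguments the LLM passes to cancel is an assumption):

conv.ask("What do I have scheduled?")
# LLM calls list_scheduled_tasks() and summarises the pending entries

conv.ask("Cancel the 9am email to Alice")
# LLM calls cancel_scheduled_task(...) with the matching task's id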

One-shot vs. recurring

# One-shot — direct tool call at a specific time
conv.ask("Send the report email at 5pm today")
# LLM calls: schedule_task("send_email", {...}, run_at="2026-04-03T17:00:00")

# Recurring — cron expression
conv.ask("Every Monday at 9am, send a summary email to the team")
# LLM calls: schedule_task("send_email", {...}, cron="0 9 * * 1")

Conversation replay (deferred reasoning)

When the LLM doesn't have all the data yet, it can schedule a future conversation instead of a direct call. At execution time, a fresh conversation runs so the LLM can reason about current data:

conv.ask("Every Friday at 6pm, check the latest sales numbers and email a report to bob@co.com")
# LLM calls: schedule_conversation("check latest sales and email report to bob@co.com", cron="0 18 * * 5")

Manual tick

For cron-based setups or testing, call tick() directly:

results = agent.tick()  # executes all due tasks right now

Task persistence

All scheduled tasks are persisted to a JSON file (default: scheduled_tasks.json). The file is human-readable and can be inspected or edited manually:

[
  {
    "id": "a1b2c3",
    "type": "tool_call",
    "function_name": "send_email",
    "arguments": {"to": "alice@co.com", "subject": "Hello"},
    "run_at": "2026-04-04T09:00:00",
    "cron": null,
    "status": "pending"
  }
]

Missed tasks

One-shot tasks whose run_at has passed while the process was down are marked as "missed" — they are not retried on startup. Recurring cron tasks simply wait for their next occurrence.

Lifecycle

agent.start_scheduler()   # start background polling thread
# ... application runs ...
agent.stop_scheduler()    # stop polling, join thread

Mixing Backends

Every agent is a composable mixin — swap the backend class and everything else stays the same:

from pygentix import Ollama, ChatGPT, Gemini, Copilot, SqlAlchemyAgent, OutputAgent

class LocalAgent(Ollama, SqlAlchemyAgent, OutputAgent):
    """Runs entirely on your machine via Ollama."""

class CloudAgent(ChatGPT, SqlAlchemyAgent, OutputAgent):
    """Uses OpenAI for inference."""

class GoogleAgent(Gemini, SqlAlchemyAgent, OutputAgent):
    """Uses Google Gemini for inference."""

class EnterpriseAgent(Copilot, SqlAlchemyAgent, OutputAgent):
    """Routes through your Azure OpenAI deployment."""

Multi-turn Conversations

A Conversation maintains the full message history, so follow-up questions have context:

from pygentix import Ollama, SqlAlchemyAgent

# ... define models, engine, etc.

class MyAgent(Ollama, SqlAlchemyAgent):
    pass

agent = MyAgent(engine=engine)
conv = agent.start_conversation()
conv.ask("Create a user named Alice with email alice@example.com")
conv.ask("Now create one for Bob at bob@example.com")
response = conv.ask("List all users")

Streaming Responses

Stream tokens as they arrive instead of waiting for the full response:

from pygentix import Ollama

agent = Ollama()
conv = agent.start_conversation()

for chunk in conv.ask_stream("Tell me a story about a robot"):
    print(chunk, end="", flush=True)

When tools are registered, the tool-call loop runs normally and the final answer is streamed. Every backend supports streaming natively (Ollama, OpenAI, Gemini, Azure).


Async Support

Use ask_async in async frameworks like FastAPI, Starlette, or Django:

import asyncio
from pygentix import Ollama

agent = Ollama()
conv = agent.start_conversation()

async def main():
    response = await conv.ask_async("What is the capital of France?")
    print(response.message.content)

asyncio.run(main())

By default, the async path (chat_async) runs the sync method in a thread pool via asyncio.to_thread; backends can override it with native async clients for lower overhead.
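
A minimal FastAPI sketch (route and request shape are illustrative):

from fastapi import FastAPI
from pygentix import Ollama

app = FastAPI()
agent = Ollama()

@app.post("/ask")
async def ask(question: str):
    conv = agent.start_conversation()
    response = await conv.ask_async(question)
    return {"answer": response.message.content}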


MockAgent for Testing

Unit-test your application without hitting a real LLM:

from pygentix.testing import MockAgent

agent = MockAgent(responses=["Hello!", "Goodbye!"])
conv = agent.start_conversation()

r1 = conv.ask("Hi")     # → "Hello!"
r2 = conv.ask("Bye")    # → "Goodbye!"

MockAgent also supports tool-call simulation and usage metadata:

agent = MockAgent(responses=[
    {"tool_calls": [{"name": "get_weather", "arguments": {"city": "Paris"}}]},
    "It's sunny in Paris!",
])

@agent.uses
def get_weather(city: str) -> str:
    """Get weather."""
    return f"22°C in {city}"

conv = agent.start_conversation()
resp = conv.ask("Weather?")  # tool executes, then returns "It's sunny in Paris!"

Event Hooks

Register callbacks to observe tool calls, results, and LLM responses in real-time:

from pygentix import Ollama

agent = Ollama()

agent.on("tool_call", lambda name, args: print(f"→ Calling {name}({args})"))
agent.on("tool_result", lambda name, result: print(f"← {name} returned: {result}"))
agent.on("response", lambda resp: print(f"LLM: {resp.message.content[:80]}"))

@agent.uses
def search(query: str) -> str:
    """Search the web."""
    return f"Results for '{query}'..."

conv = agent.start_conversation()
conv.ask("Search for pygentix")
# → Calling search({'query': 'pygentix'})
# ← search returned: Results for 'pygentix'...
# LLM: Here are the search results for pygentix...

Hooks are ideal for logging, metrics, audit trails, and content filtering.
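
As a sketch of the audit-trail use case (logger name and record format are illustrative):

import json
import logging

audit = logging.getLogger("audit")
agent.on("tool_call", lambda name, args: audit.info(json.dumps({"tool": name, "args": args})))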


Conversation Save & Load

Persist conversations to JSON and restore them later:

from pygentix import Ollama
from pygentix.core import Conversation

agent = Ollama()
conv = agent.start_conversation()
conv.ask("My name is Alice")

# Save
json_str = conv.to_json()
# or: data = conv.to_dict()

# ... later, in another process ...
restored = Conversation.from_json(agent, json_str)
resp = restored.ask("What's my name?")
# → "Your name is Alice."

Token Usage Tracking

Every ChatResponse includes token counts when the backend reports them:

from pygentix import Ollama

agent = Ollama()
conv = agent.start_conversation()
response = conv.ask("Explain quantum computing in one sentence")

print(response.usage.prompt_tokens)      # e.g. 42
print(response.usage.completion_tokens)  # e.g. 28
print(response.usage.total_tokens)       # e.g. 70

All four backends (Ollama, ChatGPT, Gemini, Copilot) populate usage automatically.


Retry with Exponential Backoff

Transient API errors (rate-limits, timeouts, 500s) are retried automatically with exponential backoff:

from pygentix import ChatGPT

agent = ChatGPT(max_retries=5, retry_delay=2.0)

Retries apply to connection errors, timeouts, and HTTP status codes 429, 500, 502, 503, 504. Non-retriable errors (auth failures, validation) are raised immediately. The delay doubles after each attempt (2s → 4s → 8s → ...).
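
With the settings above, the schedule works out as follows (a sketch of the arithmetic, not library internals):

# retry_delay=2.0, max_retries=5 → waits of 2s, 4s, 8s, 16s, 32s
delays = [2.0 * 2 ** attempt for attempt in range(5)]
print(delays)  # [2.0, 4.0, 8.0, 16.0, 32.0]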


Context Window Management

Prevent conversations from exceeding the model's context window by setting max_history:

from pygentix import Ollama

agent = Ollama()
conv = agent.start_conversation(max_history=20)

# After many turns, only the system prompt + last 20 messages are kept
for i in range(100):
    conv.ask(f"Question {i}")

len(conv.messages)  # ≤ 22 (system + 20 + current response)

The system prompt is always preserved. Oldest messages are dropped first.
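
Conceptually, the trimming policy amounts to this (a sketch, not the library's internals):

def trim(messages: list, max_history: int) -> list:
    # keep the system prompt, then only the most recent max_history messages
    return messages[:1] + messages[1:][-max_history:]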


Structured Logging

pygentix uses Python's standard logging module under the "pygentix" logger. Enable it to see every message, tool call, and response:

import logging

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("pygentix").setLevel(logging.DEBUG)

# Now every conv.ask() logs:
# INFO  pygentix: User: What's the weather?
# DEBUG pygentix: Calling tool get_weather({'city': 'Paris'})
# DEBUG pygentix: Tool get_weather → Sunny, 22°C
# INFO  pygentix: Assistant: It's sunny and 22°C in Paris.

Set to WARNING in production to silence informational logs.
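
In production that is one line:

logging.getLogger("pygentix").setLevel(logging.WARNING)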


API Reference

Core

Symbol               Description
Agent                Abstract base class — subclass to create a backend
Agent.by_name(name)  Lazy handle (AgentRef) for registering tools against a named agent from anywhere
Agent.registry       Mapping of name → live Agent instance, populated when name= is passed to the constructor
ChatResponse         Normalized response every backend returns
Conversation         Multi-turn conversation with save/load, streaming, async
Function             Introspectable wrapper around a tool callable; supports optional serializer, description, name overrides
Usage                Token usage statistics (prompt, completion, total)

Backends

Symbol    Description
Ollama    Local inference via Ollama
ChatGPT   OpenAI Chat Completions
Gemini    Google Gemini (via google-genai)
Copilot   Azure OpenAI

Mixins & Utilities

Symbol            Description
OutputAgent       JSON schema enforcement for responses
SchedulerAgent    Schedule tool calls and conversations for future execution
SqlAlchemyAgent   Database CRUD tools from ORM models
MockAgent         Fake backend for unit testing (pygentix.testing)

Development

git clone https://github.com/andreperussi/pygentix.git
cd pygentix
pip install -e ".[dev]"
pytest

License

MIT
