
wren-pydantic

Pydantic AI integration for Wren AI Core.

Attach a CLI-prepared Wren project to a Pydantic AI agent in three lines:

from wren_pydantic import WrenToolkit
from pydantic_ai import Agent

toolkit = WrenToolkit.from_project("./analytics_db")
agent = Agent(
    "openai:gpt-4o",
    instructions=toolkit.instructions(),
    toolsets=[toolkit.toolset()],
)
result = agent.run_sync("How many enterprise customers do we have?")
print(result.output)

⚠️ Wren CLI required first. This SDK is a thin adapter over a Wren project that the wren CLI has already prepared (profile + MDL + optional memory index). Follow the install guide before installing this package.

Prerequisites

This package assumes you have already used the Wren CLI to prepare a project:

wren profile add my_project --datasource duckdb   # or mysql, postgres, ...
wren context init
wren context set-profile my_project               # binds profile to project
wren context build                                # produces target/mdl.json
wren memory index                                 # optional but recommended

If you haven't installed the CLI yet, install wren-engine first:

pip install "wren-engine[memory,postgres]"

Installation

wren-pydantic exposes datasource and memory extras that pass through to the matching wren-engine extras, so you only have to install once:

# Match the datasource your wren_project.yml uses (DuckDB needs no extra):
pip install "wren-pydantic[mysql]"
pip install "wren-pydantic[postgres,memory]"
pip install "wren-pydantic[bigquery,memory]"

# Available datasource extras: postgres, mysql, bigquery, snowflake,
# clickhouse, trino, mssql, databricks, redshift, spark, athena, oracle.

# `memory` extra enables the three memory tools (wren_fetch_context,
# wren_recall_queries, wren_store_query). Without it the toolkit exposes
# only the three runtime tools.

# Install everything for experimentation:
pip install "wren-pydantic[all,memory]"

If wren-engine is already installed (e.g. you use the CLI), the bare pip install wren-pydantic is enough — your existing extras carry over.

What you get

WrenToolkit.from_project(path) exposes:

  • 6 LLM-facing tools (3 runtime + 3 memory when .wren/memory/ exists):
    • wren_query — execute SQL through Wren's semantic layer, returns a WrenQueryResult (typed Pydantic model)
    • wren_dry_plan — plan SQL without execution; verifies it targets MDL models correctly
    • wren_list_models — list project models with column counts and descriptions
    • wren_fetch_context — retrieve schema/business context for a question
    • wren_recall_queries — surface similar past NL→SQL pairs as few-shot examples
    • wren_store_query — persist a confirmed NL→SQL pair for future recall (retries=0 — write failures don't loop)
  • Direct Python API (sync; no async wrappers — see docs/core/sdk/pydantic.md for why):
    toolkit.query("SELECT ...")      # → pyarrow.Table
    toolkit.dry_plan("SELECT ...")   # → str (target-dialect SQL)
    toolkit.dry_run("SELECT ...")    # → None (validates without exec)
    toolkit.memory.fetch("revenue trends")
    toolkit.memory.recall("top customers")
    toolkit.memory.store(nl="...", sql="...", tags=["..."])
    
  • toolkit.instructions() — Pydantic-AI-aware instructions string that adapts to enabled tools and includes your project's instructions.md when present.

Errors from the engine are converted into Pydantic AI's ModelRetry with phase-aware framing — the agent can self-correct on SQL or metadata errors. Infrastructure errors (connection failures, missing DuckDB files) propagate as WrenError for outer code to handle.
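That phase-aware routing can be sketched with stand-in exception types. The names mirror the real ones (ModelRetry lives in pydantic_ai, WrenError in Wren Core), but the classes, the phase labels, and the `route_engine_error` helper below are illustrative, not the package's internals:

```python
# Illustrative sketch of phase-aware error routing; these stand-in
# classes are not the actual pydantic_ai / wren APIs.

class WrenError(Exception):
    """Infrastructure failure: connection refused, missing DuckDB file, etc."""

class ModelRetry(Exception):
    """Signal that the agent should revise its last tool call and retry."""

# Phases where the model itself can plausibly fix the problem.
RETRYABLE_PHASES = {"sql_parse", "sql_plan", "metadata"}

def route_engine_error(phase: str, message: str) -> Exception:
    """Convert an engine failure into the exception the agent layer expects."""
    if phase in RETRYABLE_PHASES:
        # The agent sees the phase in the message and can self-correct.
        return ModelRetry(f"[{phase}] {message}")
    # Anything else (network, filesystem) propagates for outer code to handle.
    return WrenError(f"[{phase}] {message}")

print(type(route_engine_error("sql_parse", "unknown column 'custmer_id'")).__name__)
print(type(route_engine_error("connection", "could not reach warehouse")).__name__)
```

The split matters because retrying an infrastructure failure wastes model turns: only errors the model caused (bad SQL, wrong model name) are worth feeding back to it.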

Configuration

WrenToolkit.from_project(
    path,                # required — path to your prepared Wren project
    profile="prod",      # optional — picks a named profile (default: active)
)

toolset = toolkit.toolset(
    include_memory_write=True,   # set False to keep memory read-only
    takes_ctx=False,             # set True if mixing with deps_type= tools
)

toolkit.instructions(toolset=toolset)  # pass same toolset for prompt sync

Memory is auto-detected from the project: if <path>/.wren/memory/ exists, the 3 memory tools are exposed alongside the 3 runtime tools; if it is absent, only the runtime tools appear. To enable memory, run wren memory index from the project root; to disable it, delete the directory. There is no override kwarg.
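The detection rule amounts to a directory check. A minimal, self-contained sketch of that behavior (the helper name and the tool-name lists are taken from the descriptions above; the function is illustrative, not the package's code):

```python
from pathlib import Path
import tempfile

RUNTIME_TOOLS = ["wren_query", "wren_dry_plan", "wren_list_models"]
MEMORY_TOOLS = ["wren_fetch_context", "wren_recall_queries", "wren_store_query"]

def detect_tools(project_path: str) -> list[str]:
    """Runtime tools always; memory tools only when .wren/memory/ exists."""
    tools = list(RUNTIME_TOOLS)
    if (Path(project_path) / ".wren" / "memory").is_dir():
        tools += MEMORY_TOOLS
    return tools

with tempfile.TemporaryDirectory() as project:
    print(len(detect_tools(project)))   # → 3, no memory index yet
    # Simulate what `wren memory index` leaves behind:
    (Path(project) / ".wren" / "memory").mkdir(parents=True)
    print(len(detect_tools(project)))   # → 6
```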

include_memory_write=False removes wren_store_query from the toolset while keeping wren_fetch_context and wren_recall_queries. Use this for shared / curated memory stores.
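The read-only mode is just a filter over the memory tool set; a small sketch under that assumption (the function is illustrative, mirroring the kwarg described above):

```python
MEMORY_TOOLS = ["wren_fetch_context", "wren_recall_queries", "wren_store_query"]

def memory_tools(include_memory_write: bool = True) -> list[str]:
    """Drop the write tool for shared or curated memory stores."""
    if include_memory_write:
        return list(MEMORY_TOOLS)
    return [t for t in MEMORY_TOOLS if t != "wren_store_query"]

print(memory_tools(include_memory_write=False))
# → ['wren_fetch_context', 'wren_recall_queries']
```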

takes_ctx=True exposes ctx: RunContext as the first parameter of every tool. Use this when mixing wren tools with your own deps_type=-typed tools in the same agent. The context is ignored internally — the toolkit already captures its own state.

Compatibility matrix

wren-pydantic   wren-engine   pydantic-ai
0.1.x           >= 0.5.0      >= 1.0, < 2.0

Known limitations (v0.1)

  • Sync direct API only. aquery / adry_plan etc. are not provided — Pydantic AI auto-bridges sync tools to its async run loop, and the underlying WrenEngine is sync I/O so an async wrapper would be fake-async with no real concurrency benefit. Revisit when Core ships an async-native engine.
  • One toolkit per agent. If you need to query multiple Wren projects, build separate toolkits + agents and federate in Python.
  • Memory is auto-detected from .wren/memory/ and there is no kwarg to override. To enable, run wren memory index; to disable, delete the directory.
  • No hot reload mechanism. target/mdl.json is re-read on every tool call so wren context build updates are picked up automatically. Profile changes require constructing a new toolkit.
  • Don't run wren memory index while an agent is using the same project. The index operation drops and recreates the LanceDB schema table; concurrent reads may transiently fail.
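The hot-reload point is less limiting than it sounds: because the MDL is read fresh on every tool call, a rebuild lands on the very next invocation without restarting anything. A stdlib-only sketch of that read-per-call pattern (the file layout matches target/mdl.json above; `load_mdl` and the JSON shape are illustrative):

```python
import json
import tempfile
from pathlib import Path

def load_mdl(project_path: str) -> dict:
    """Re-read target/mdl.json on each call, so rebuilds are picked up immediately."""
    return json.loads((Path(project_path) / "target" / "mdl.json").read_text())

with tempfile.TemporaryDirectory() as project:
    target = Path(project) / "target"
    target.mkdir()
    (target / "mdl.json").write_text(json.dumps({"models": ["customers"]}))
    print(load_mdl(project)["models"])   # → ['customers']
    # Simulate `wren context build` adding a model between two tool calls:
    (target / "mdl.json").write_text(json.dumps({"models": ["customers", "orders"]}))
    print(load_mdl(project)["models"])   # → ['customers', 'orders']
```

The trade-off is a small amount of I/O per call in exchange for never serving a stale schema; profile changes still need a new toolkit because the connection is bound at construction time.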

License

Apache License 2.0. See LICENSE for the full text.

The names "Wren", "WrenAI", and the project's logos are trademarks of Canner, Inc. and are not licensed under Apache 2.0; their use is governed separately.

Download files

Download the file for your platform.

Source Distribution

wren_pydantic-0.2.0.tar.gz (39.3 kB)


Built Distribution

wren_pydantic-0.2.0-py3-none-any.whl (29.9 kB)


File details

Details for the file wren_pydantic-0.2.0.tar.gz.

File metadata

  • Download URL: wren_pydantic-0.2.0.tar.gz
  • Size: 39.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for wren_pydantic-0.2.0.tar.gz
Algorithm     Hash digest
SHA256        652e91c2e14b6fa71f49d1f8f829146f5d91c837e15f212acc17d93c411ff0cf
MD5           2a314b9ad2a57fe27d993d0c0df74800
BLAKE2b-256   7aa612529734f60716a52d688ffcee75f8692db8f1addb471605db0cab0cd768


File details

Details for the file wren_pydantic-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: wren_pydantic-0.2.0-py3-none-any.whl
  • Size: 29.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for wren_pydantic-0.2.0-py3-none-any.whl
Algorithm     Hash digest
SHA256        0c2f013c0b3ddc0bf3790679e541be203095d6e79df1037252261a5576868a12
MD5           d533e4f3b4089b67c1892b93d9a61c03
BLAKE2b-256   7f693015d14e867def8b31fa3c217d176051fb8e2b3e3840beefe36a1fff86b7

