# divami-labs-experiential-ui-agent

AI agent that generates DashboardSpec JSON from any context data, powered by pydantic-ai.

AI-powered dashboard generation package. Pass any user context dict and a role string; get back a fully structured DashboardSpec JSON ready for the React frontend to render — no transformation needed.

Built on pydantic-ai. Bring your own model — Google, OpenAI, or any pydantic-ai-compatible provider.
## Structure

```text
package/
├── src/
│   └── dynamic_ui/
│       ├── __init__.py    # Public API surface
│       ├── agent.py       # Agent setup + generate_dashboard() entry point
│       ├── deps.py        # AgentDeps runtime dependency container
│       ├── models.py      # DashboardSpec Pydantic schema (mirrors frontend types.ts)
│       ├── prompt.py      # System prompt for the final assembly step
│       ├── telemetry.py   # Optional Logfire integration
│       └── tools.py       # Six reasoning tools (pipeline steps 1–6)
└── pyproject.toml
```
## Installation

Pick only the provider extras you need:

```shell
# Google Gemini
pip install "divami-labs-experiential-ui-agent[google]"

# OpenAI
pip install "divami-labs-experiential-ui-agent[openai]"

# Both + Logfire observability
pip install "divami-labs-experiential-ui-agent[all]"
```
| Provider | Extra | Env var required |
|---|---|---|
| Google Gemini | `[google]` | `GOOGLE_API_KEY` |
| OpenAI | `[openai]` | `OPENAI_API_KEY` |
| Logfire observability | `[logfire]` | `LOGFIRE_TOKEN` |
| Everything | `[all]` | — |
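Since a missing key only surfaces once the provider is called, it can be useful to fail fast at startup. A minimal sketch of such a check (`require_env` is a hypothetical helper, not part of this package):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail loudly.

    Hypothetical fail-fast helper: call it with the env var from the
    table above (e.g. GOOGLE_API_KEY or OPENAI_API_KEY) before
    constructing the agent, so misconfiguration is caught at startup.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; the model provider cannot authenticate")
    return value

# Example: require_env("GOOGLE_API_KEY")
```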
## Quick start

```python
from dynamic_ui import generate_dashboard, DashboardSpec

spec: DashboardSpec = await generate_dashboard(
    model="google-gla:gemini-2.0-flash",  # or "openai:gpt-4o", etc.
    context={
        "sales": [...],
        "emails": [...],
    },
    user_role="VP of Sales",
    user_persona=(
        "Prefers high-level KPI cards before detailed charts. "
        "Focuses on pipeline health and quota attainment. "
        "Needs risk flags surfaced at the top of the dashboard."
    ),
    user_prompt="What are my top priorities today?",  # optional
)

# Pass directly to the frontend <DashboardRenderer> — no transformation needed.
payload = spec.model_dump()
```
## With Logfire observability (optional)

Call once at application startup, before any agent run:

```python
from dynamic_ui import configure_logfire

configure_logfire(service_name="my-app")
```

Reads `LOGFIRE_TOKEN` from the environment automatically. Safe to call even when logfire is not installed — logs a warning and continues.
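That "safe when not installed" behaviour is the classic guarded-import pattern for optional dependencies. A minimal sketch of the idea (an illustration, not the package's actual telemetry.py):

```python
import logging

logger = logging.getLogger(__name__)

def configure_logfire_safely(service_name: str) -> bool:
    """Sketch of a guarded Logfire setup for an optional dependency.

    Returns True when logfire was configured, False when the [logfire]
    extra is not installed (a warning is logged and execution continues).
    """
    try:
        import logfire  # only present when installed via the [logfire] extra
    except ImportError:
        logger.warning("logfire is not installed; telemetry disabled")
        return False
    logfire.configure(service_name=service_name)  # reads LOGFIRE_TOKEN from env
    return True
```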
Usage in a FastAPI / async server
import asyncio
from dynamic_ui import generate_dashboard, DashboardSpec
# FastAPI example
from fastapi import FastAPI
app = FastAPI()
@app.post("/dashboard")
async def create_dashboard(role: str, context: dict, persona: str | None = None) -> dict:
spec: DashboardSpec = await generate_dashboard(
model="openai:gpt-4o",
context=context,
user_role=role,
user_persona=persona,
)
return spec.model_dump()
# Plain asyncio script
if __name__ == "__main__":
async def main():
spec = await generate_dashboard(
model="google-gla:gemini-2.0-flash",
context={"orders": [{"id": 1, "amount": 250}]},
user_role="Operations Manager",
user_persona="Needs cost and fulfilment KPIs front and centre.",
)
import json
print(json.dumps(spec.model_dump(), indent=2))
asyncio.run(main())
## Accessing individual schema types

All Pydantic models and chart-type constants are re-exported from the top-level package:

```python
from dynamic_ui import (
    DashboardSpec,
    DashboardRow,
    CardWidget,
    ChartWidget,
    TableWidget,
    ListWidget,
    Metric,
    SeriesConfig,
    ListItem,
    DataRecord,
    CHART_BAR,
    CHART_LINE,
    CHART_PIE,
    # ... see __init__.py for full list
)
```
## Advanced: supply your own AgentDeps

`AgentDeps` is the runtime dependency container injected into every tool call. All fields available on `AgentDeps` are also first-class parameters of `generate_dashboard()`:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | `str \| Model` | ✅ | pydantic-ai model string, e.g. `"openai:gpt-4o"` |
| `context` | `dict` | ✅ | The user's raw data (sales records, emails, metrics, …) |
| `user_role` | `str` | — | Job title / role, e.g. `"VP of Sales"` |
| `user_persona` | `str \| None` | — | Richer behavioural description — communication style, preferred detail level, key KPIs, pain points |
| `user_prompt` | `str \| None` | — | Explicit question the dashboard should answer |
| `extra_context` | `dict \| None` | — | Tenant info, feature flags, locale, theme palette, etc. |
```python
from dynamic_ui import AgentDeps, generate_dashboard

# Using generate_dashboard directly (recommended)
spec = await generate_dashboard(
    model="openai:gpt-4o",
    context={...},
    user_role="CFO",
    user_persona=(
        "Needs a single-screen P&L summary. "
        "Prefers bar charts over tables. Risk items must appear first."
    ),
    user_prompt="How are we tracking against Q2 targets?",
)

# Or build AgentDeps manually for advanced use
deps = AgentDeps(
    context={...},
    user_role="CFO",
    user_persona="Needs a single-screen P&L summary.",
)
```
## Agent pipeline

Each `generate_dashboard()` call runs six reasoning steps in order via pydantic-ai tools:
| Step | Tool | Purpose |
|---|---|---|
| 1 | `analyse_persona` | Who is this user, what do they need today |
| 2 | `analyse_data` | Scan CONTEXT_JSON for signals and anomalies |
| 3 | `plan_interactions` | Assign channel keys, decide controller/follower pairs |
| 4 | `plan_layout` | Design the section/grid skeleton |
| 5 | `plan_navigation` | Decide which widgets open drilldown views |
| 6 | `generate_dashboard_json` | Assemble and validate the final DashboardSpec |
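Conceptually this is a fixed sequence where each step enriches a shared state before the final assembly. A plain-Python sketch of that sequencing (illustrative stubs only; the real steps are pydantic-ai tool calls driven by the model, receiving `AgentDeps` at runtime):

```python
from typing import Any, Callable

State = dict[str, Any]

# Hypothetical stubs standing in for the six reasoning tools; each step
# reads the accumulating state and writes its own contribution.
def analyse_persona(state: State) -> State:
    state["persona"] = state["user_role"]
    return state

def analyse_data(state: State) -> State:
    state["signals"] = sorted(state["context"])  # keys found in the context
    return state

def plan_interactions(state: State) -> State:
    state["channels"] = [f"ch_{key}" for key in state["signals"]]
    return state

def plan_layout(state: State) -> State:
    state["sections"] = [{"widgets": []}]  # section/grid skeleton
    return state

def plan_navigation(state: State) -> State:
    state["drilldowns"] = []
    return state

def generate_dashboard_json(state: State) -> State:
    state["spec"] = {"sections": state["sections"]}  # final assembly
    return state

PIPELINE: list[Callable[[State], State]] = [
    analyse_persona, analyse_data, plan_interactions,
    plan_layout, plan_navigation, generate_dashboard_json,
]

def run_pipeline(context: dict, user_role: str) -> State:
    state: State = {"context": context, "user_role": user_role}
    for step in PIPELINE:  # steps must run in order 1-6
        state = step(state)
    return state
```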
## DashboardSpec schema

The output mirrors `frontend/src/types/index.ts` exactly. Key types:

```text
DashboardSpec
└── sections: DashboardRow[]
    └── widgets: Widget[]   # CardWidget | ChartWidget | TableWidget | ListWidget
```
Widget placement uses CSS grid — set `colSpan` / `rowSpan`; declare `cols` (and `rows` when using `rowSpan`) on the section.

Interactive widgets communicate via channel keys:

- Controller chart sets `broadcastOn: "ch_<noun>"`
- Follower widgets set `listenTo: "ch_<noun>"` and `reactionMode: "highlight" | "filter"`
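To make the wiring concrete, here is a hypothetical section payload using these fields. The field names (`cols`, `colSpan`, `broadcastOn`, `listenTo`, `reactionMode`) come from the text above; the surrounding structure is an assumption for illustration, and the authoritative shape lives in `models.py` / `types.ts`:

```python
import json

# Hypothetical controller/follower pair sharing the channel "ch_region".
section = {
    "cols": 4,
    "widgets": [
        {   # controller: selecting a bar broadcasts the chosen region
            "type": "chart",
            "colSpan": 2,
            "broadcastOn": "ch_region",
        },
        {   # follower: filters its rows to the broadcast region
            "type": "table",
            "colSpan": 2,
            "listenTo": "ch_region",
            "reactionMode": "filter",
        },
    ],
}

# The pair only interacts when the channel keys match exactly.
controller, follower = section["widgets"]
assert controller["broadcastOn"] == follower["listenTo"]

payload = json.dumps(section)  # plain JSON, ready to embed in a DashboardSpec
```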
## Development setup

```shell
cd backend/package
uv sync
```
## Publishing to PyPI

### Prerequisites

- Create an account on PyPI (and optionally TestPyPI for staging).
- Generate an API token under Account Settings → API tokens on PyPI.
- Install `uv` if not already available: `curl -LsSf https://astral.sh/uv/install.sh | sh`
### Step 1 — bump the version

Edit `pyproject.toml` and increment the version field (follow Semantic Versioning):

```toml
[project]
version = "0.2.0"  # was 0.1.0
```
### Step 2 — build the distribution artefacts

```shell
cd backend/package
uv build
```

This produces two files inside `dist/`:

- `divami_labs_experiential_ui_agent-<version>-py3-none-any.whl` — wheel (fast install)
- `divami_labs_experiential_ui_agent-<version>.tar.gz` — source distribution
### Step 3 — (optional) smoke-test on TestPyPI first

```shell
# Upload to the test index
uv publish --publish-url https://test.pypi.org/legacy/ --token pypi-<YOUR_TEST_TOKEN>

# Install from TestPyPI to verify
pip install --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ \
    "divami-labs-experiential-ui-agent[google]"
```
### Step 4 — publish to PyPI

```shell
uv publish --token pypi-<YOUR_PYPI_TOKEN>
```

Or configure the token once via the `UV_PUBLISH_TOKEN` environment variable so you don't pass it on every command:

```shell
export UV_PUBLISH_TOKEN="pypi-<YOUR_PYPI_TOKEN>"
uv publish
```

Tip: store the token in your CI/CD secret store (a GitHub Actions `PYPI_API_TOKEN` secret, etc.) and never commit it to the repository.
### Step 5 — verify the release

```shell
pip install "divami-labs-experiential-ui-agent[google]" --upgrade
python -c "from dynamic_ui import generate_dashboard; print('OK')"
```
## CI/CD (GitHub Actions example)

Create `.github/workflows/publish.yml`:

```yaml
name: Publish to PyPI

on:
  push:
    tags:
      - "v*"  # trigger on version tags, e.g. v0.2.0

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Build
        run: uv build
        working-directory: backend/package
      - name: Publish
        run: uv publish
        working-directory: backend/package
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
```

Add `PYPI_API_TOKEN` as a repository secret in Settings → Secrets and variables → Actions.