Promptev Python SDK
The official Python SDK for Promptev.ai — run AI prompts and agents programmatically using your project API key.
Installation
pip install promptev
Requires Python 3.7+. Only dependency: httpx.
Quick Start
from promptev import PromptevClient

client = PromptevClient(project_key="pv_sk_your_key_here")

# Run a prompt (compiles + executes if model is configured)
result = client.run_prompt(
    "support-agent",
    query="What is the refund policy?",
    variables={"company": "Acme Corp"},
)
print(result)

# Chat with an AI agent
session = client.start_agent("your-agent-id")
for event in client.stream_agent(
    session.chatbot_id,
    session_token=session.session_token,
    query="Summarize our Q4 sales report",
):
    if event.type == "done":
        print(event.output)
Prompts
Promptev prompts are versioned, server-managed templates. run_prompt compiles the template with your variables — and if the prompt has a model configured in Promptev, it also executes it against the LLM and returns the AI response directly.
Run a prompt with variables
result = client.run_prompt(
    "support-agent",
    query="How do I reset my password?",
    variables={"company": "Acme Corp", "tone": "professional"},
)
Run a prompt without variables
result = client.run_prompt(
    "knowledge-base",
    query="What is the refund policy?",
)
With a model configured (auto-execute)
If your prompt has a model and/or context packs attached in Promptev, run_prompt compiles the template, retrieves relevant context via RAG, sends it to the LLM, and returns the AI response:
answer = client.run_prompt(
    "support-agent",
    query="What is the refund policy?",
    variables={"company": "Acme Corp"},
)
print(answer)  # "Our refund policy allows returns within 30 days..."
Without a model (use with your own LLM)
If no model is configured, run_prompt returns the compiled template — use it with any LLM:
from openai import OpenAI
from promptev import PromptevClient

promptev = PromptevClient(project_key="pv_sk_...")
openai_client = OpenAI(api_key="sk-...")

system_prompt = promptev.run_prompt(
    "support-agent",
    query="How do I reset my password?",
    variables={"company": "Acme Corp", "tone": "professional"},
)

response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
Stream a prompt (with tools or real-time output)
When a prompt has tools attached (Jira, Slack, GitHub, etc.) or you want real-time output, use stream_prompt. It returns SSE events — same format as agent streaming:
for event in client.stream_prompt(
    "research-assistant",
    query="Find all P1 bugs assigned to me",
    variables={"project": "ACME"},
):
    if event.type == "thoughts":
        print(f"Thinking: {event.output}")
    elif event.type == "processing":
        print(f"Running: {event.output}")
    elif event.type == "done":
        print(event.output)
When to use which:
- run_prompt() — simple prompt execution, no tools, returns a string
- stream_prompt() — prompts with tools, RAG-heavy queries, or when you want real-time output
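The two can also be combined: the sketch below is a hypothetical convenience helper (not part of the SDK) that streams a prompt but returns only the final text, assuming the stream_prompt event API described above.

```python
def stream_prompt_text(client, prompt_key, query, variables=None):
    """Stream a prompt but return only the final text.

    Hypothetical helper: assumes events carry a .type of "done"
    (final text) or "error", as in the streaming example above.
    """
    for event in client.stream_prompt(prompt_key, query=query, variables=variables or {}):
        if event.type == "done":
            return event.output
        if event.type == "error":
            raise RuntimeError(event.output)
```

For example, `stream_prompt_text(client, "research-assistant", "Find all P1 bugs assigned to me", {"project": "ACME"})` would yield only the final answer.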
Agents
Promptev agents are deployed AI assistants with built-in memory, tools (Jira, Slack, GitHub, etc.), and RAG context packs. The SDK lets you start sessions and stream responses in real time.
Start a session
session = client.start_agent("your-agent-id", visitor="John")
print(session.session_token) # Use this for all subsequent messages
print(session.name) # Agent display name
print(session.memory_enabled) # Whether agent retains conversation context
Stream a response
The agent responds via Server-Sent Events (SSE). Each event has a type and output:
| Event Type | Description |
|---|---|
| thoughts | Agent's internal reasoning |
| processing | Tool execution status (e.g., "Searching Jira...") |
| approval_required | Agent needs permission to run a tool |
| done | Final response text |
| error | Something went wrong |
for event in client.stream_agent(
    session.chatbot_id,
    session_token=session.session_token,
    query="What are the open P1 bugs in our backlog?",
):
    if event.type == "thoughts":
        print(f"Thinking: {event.output}")
    elif event.type == "processing":
        print(f"Running: {event.output}")
    elif event.type == "done":
        print(f"\n{event.output}")
    elif event.type == "error":
        print(f"Error: {event.output}")
Multi-turn conversation
The session token maintains conversation context across messages:
session = client.start_agent("your-agent-id", visitor="Sarah")

# First message
for event in client.stream_agent(
    session.chatbot_id,
    session_token=session.session_token,
    query="Summarize our Q4 sales report",
):
    if event.type == "done":
        print(event.output)

# Follow-up — agent remembers the previous context
for event in client.stream_agent(
    session.chatbot_id,
    session_token=session.session_token,
    query="Compare that with Q3",
):
    if event.type == "done":
        print(event.output)
Collect the final response only
If you only need the final text and don't care about intermediate events:
def ask_agent(client, session, query):
    """Send a message and return only the final response."""
    for event in client.stream_agent(
        session.chatbot_id,
        session_token=session.session_token,
        query=query,
    ):
        if event.type == "done":
            return event.output
        if event.type == "error":
            raise RuntimeError(event.output)

answer = ask_agent(client, session, "What's our monthly churn rate?")
print(answer)
Async Usage
Every method has an async variant prefixed with a (arun_prompt, astart_agent, astream_agent). Use these in FastAPI, notebooks, or any async context:
import asyncio

from promptev import PromptevClient

async def main():
    async with PromptevClient(project_key="pv_sk_...") as client:
        # Async prompt execution
        result = await client.arun_prompt(
            "support-agent",
            query="What is the refund policy?",
            variables={"company": "Acme Corp"},
        )

        # Async agent session
        session = await client.astart_agent("your-agent-id", visitor="Ava")

        # Async streaming
        async for event in client.astream_agent(
            session.chatbot_id,
            session_token=session.session_token,
            query="How many support tickets came in today?",
        ):
            if event.type == "done":
                print(event.output)

asyncio.run(main())
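Because every call has an async variant, independent prompt runs can be dispatched concurrently with asyncio.gather. The helper below is a minimal sketch, not part of the SDK; it assumes an awaitable runner with the arun_prompt signature shown above.

```python
import asyncio

async def run_many(arun, prompt_key, queries):
    """Run one prompt against several queries concurrently.

    Hypothetical helper: `arun` is any awaitable prompt runner,
    e.g. client.arun_prompt. Results come back in the same order
    as `queries`.
    """
    return await asyncio.gather(*(arun(prompt_key, query=q) for q in queries))
```

For instance, `await run_many(client.arun_prompt, "support-agent", ["What is the refund policy?", "How do I reset my password?"])` would return both answers in order.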
FastAPI integration
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from promptev import PromptevClient

app = FastAPI()
client = PromptevClient(project_key="pv_sk_...")

@app.post("/ask")
async def ask(agent_id: str, query: str):
    session = await client.astart_agent(agent_id)

    async def generate():
        async for event in client.astream_agent(
            session.chatbot_id,
            session_token=session.session_token,
            query=query,
        ):
            # event.raw is a dict, so serialize it to JSON for the SSE data line
            yield f"data: {json.dumps(event.raw)}\n\n"

    return StreamingResponse(generate(), media_type="text/event-stream")
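On the consuming side, an SSE stream from an endpoint like this can be parsed line by line. The helper below is a sketch, not part of the SDK, and assumes each event arrives as a `data: <json>` line:

```python
import json

def iter_sse_data(lines):
    """Yield the parsed JSON payload of each `data:` line in an SSE stream.

    Non-data lines (blank separators, `:` keep-alive comments) are skipped.
    """
    for line in lines:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])
```

With httpx, the same helper works over `r.iter_lines()` from an `httpx.stream("POST", ...)` context manager.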
Error Handling
The SDK raises typed exceptions for each failure scenario:
from promptev import (
    PromptevClient,
    ValidationError,
    AuthenticationError,
    NotFoundError,
    RateLimitError,
    ServerError,
    NetworkError,
)

client = PromptevClient(project_key="pv_sk_...")

try:
    result = client.run_prompt("my-prompt", query="Tell me about Ava", variables={"name": "Ava"})
except ValidationError as e:
    # 400 — missing variables, bad input
    print(f"Invalid request: {e}")
except NotFoundError as e:
    # 404 — prompt or project not found
    print(f"Not found: {e}")
except AuthenticationError as e:
    # 401/403 — invalid API key or agent not active
    print(f"Auth error: {e}")
except RateLimitError as e:
    # 429 — API usage quota exceeded
    print(f"Rate limited: {e}")
except ServerError as e:
    # 5xx — server error (after retries exhausted)
    print(f"Server error: {e}")
except NetworkError as e:
    # Connection failed, timeout, DNS error
    print(f"Network error: {e}")
All exceptions inherit from PromptevError and include:
- e.status_code — HTTP status code (if applicable)
- e.response_text — Raw response body (for debugging)
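RateLimitError and ServerError are often transient, so a call can be wrapped with exponential backoff. A minimal sketch (with_retry is a hypothetical helper, not part of the SDK):

```python
import time

def with_retry(call, retryable, attempts=3, base_delay=1.0):
    """Invoke call(), retrying on `retryable` exception types.

    Hypothetical helper: waits base_delay, 2*base_delay, 4*base_delay, ...
    between attempts, and re-raises once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return call()
        except retryable:
            if attempt == attempts - 1:
                raise  # no attempts left; surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_retry(lambda: client.run_prompt("my-prompt", query="..."), retryable=(RateLimitError, ServerError))`.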
Configuration
client = PromptevClient(
    project_key="pv_sk_...",             # Required — your project API key
    base_url="https://api.promptev.ai",  # Default — override for self-hosted
    timeout=30.0,                        # Default — request timeout in seconds
    max_retries=2,                       # Default — retries for 502/503/504
    headers={"X-Custom": "value"},       # Optional — extra HTTP headers
)
| Parameter | Default | Description |
|---|---|---|
| project_key | required | Your Promptev project API key |
| base_url | https://api.promptev.ai | API base URL |
| timeout | 30.0 | Request timeout in seconds |
| max_retries | 2 | Automatic retries for transient server errors (502, 503, 504) |
| headers | None | Additional HTTP headers |
API Reference
PromptevClient
| Method | Description | Returns |
|---|---|---|
| run_prompt(prompt_key, query, variables?) | Compile and execute a prompt (sync) | str |
| arun_prompt(prompt_key, query, variables?) | Compile and execute a prompt (async) | str |
| stream_prompt(prompt_key, query, variables?) | Stream prompt execution with tools (sync) | Iterator[AgentEvent] |
| astream_prompt(prompt_key, query, variables?) | Stream prompt execution with tools (async) | AsyncIterator[AgentEvent] |
| start_agent(chatbot_id, *, visitor?, platform?) | Start agent session (sync) | AgentSession |
| astart_agent(chatbot_id, *, visitor?, platform?) | Start agent session (async) | AgentSession |
| stream_agent(chatbot_id, *, session_token, query) | Stream agent response (sync) | Iterator[AgentEvent] |
| astream_agent(chatbot_id, *, session_token, query) | Stream agent response (async) | AsyncIterator[AgentEvent] |
| close() | Close HTTP clients (sync) | None |
| aclose() | Close HTTP clients (async) | None |
AgentSession
| Field | Type | Description |
|---|---|---|
| session_token | str | Token for subsequent stream calls |
| chatbot_id | str | Agent identifier |
| name | str | Agent display name |
| memory_enabled | bool | Whether agent retains conversation context |
| messages | list | Previous messages (populated when resuming a session) |
AgentEvent
| Field | Type | Description |
|---|---|---|
| type | str | Event type: thoughts, processing, done, error, approval_required |
| output | str | Event content text |
| raw | dict | Full parsed SSE event data |
Exceptions
| Exception | HTTP Status | When |
|---|---|---|
| ValidationError | 400 | Missing required variables, bad input |
| AuthenticationError | 401, 403 | Invalid API key, agent not active |
| NotFoundError | 404 | Project, prompt, or agent not found |
| RateLimitError | 429 | API usage quota exceeded |
| ServerError | 5xx | Server error (after retries exhausted) |
| NetworkError | — | Connection failed, timeout, DNS error |
| PromptevError | any | Base class for all above exceptions |
License
This SDK is commercial software by Promptev Inc.
- Free tier use allowed
- Production use requires an active subscription
See LICENSE for full terms.
Support
- Website: promptev.ai
- Email: support@promptev.ai