In-process SDK runtime for agent-search with optional callback-driven Langfuse tracing
agent-search core SDK
In-process Python SDK for agent-search.
The PyPI package is intentionally narrow: consumers should call advanced_rag(...) and treat that as the supported entrypoint.
The SDK always requires both:
- A chat model (for example langchain_openai.ChatOpenAI)
- A vector store that implements similarity_search(query, k, filter=None)
It does not auto-build these dependencies for you.
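Any object exposing that method satisfies the vector store contract. A minimal duck-typed sketch (the class and document shape below are illustrative, not SDK types):
from dataclasses import dataclass

@dataclass
class Document:
    # Mirrors the usual LangChain document shape; illustrative only.
    page_content: str
    metadata: dict

class TinyKeywordStore:
    """Toy store satisfying similarity_search(query, k, filter=None)."""

    def __init__(self, docs: list[Document]):
        self._docs = docs

    def similarity_search(self, query: str, k: int, filter: dict | None = None):
        # Naive keyword overlap stands in for real vector similarity.
        terms = set(query.lower().split())
        ranked = sorted(
            self._docs,
            key=lambda d: len(terms & set(d.page_content.lower().split())),
            reverse=True,
        )
        return ranked[:k]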
Install (PyPI)
python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install agent-search-core
python -c "import agent_search; print(agent_search.__file__)"
Quick start
from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

# your_langchain_vector_store is a placeholder for any LangChain vector store.
vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
)
print(response.output)
Contract notes for 1.0.8
Use these canonical names in new config payloads:
- thread_id
- custom_prompts
- runtime_config
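A config payload using these canonical names might look like this (the thread_id and prompt texts are placeholders):
client_config = {
    "thread_id": "550e8400-e29b-41d4-a716-446655440000",
    "custom_prompts": {
        "subanswer": "Answer each sub-question with cited evidence.",
        "synthesis": "Synthesize a short answer and keep citation markers.",
    },
    "runtime_config": {
        # Per-run overrides; see the prompt customization section below.
        "custom_prompts": {"synthesis": "Per-run override for this call only."}
    },
}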
Compatibility notes:
- custom-prompts is still accepted as an input alias, but new code should send custom_prompts.
- advanced_rag(...) remains the supported sync entrypoint for agent-search-core.
- For HITL flows, use the checkpointed runtime runner described below.
Human-in-the-loop (HITL)
agent-search-core supports two opt-in review stages on advanced_rag(...):
- hitl_subquestions=True pauses after decomposition so the caller can review or edit subquestions.
- hitl_query_expansion=True pauses before search expansion executes so the caller can review or edit expanded queries.
You can enable either stage or both. The SDK returns a normalized review object when a run pauses, and resume calls use SDK-owned decision helpers instead of raw backend payloads.
HITL still requires checkpoint persistence. The public API does not ask you to pass a checkpointer: advanced_rag(...) creates one internally with LangGraph's PostgresSaver and resumes from the stored checkpoint ID on the next call. In practice that means:
- A reachable Postgres database must be configured.
- The SDK reads DATABASE_URL and defaults to postgresql+psycopg://agent_user:agent_pass@db:5432/agent_search.
- If you run outside Docker, set DATABASE_URL explicitly so the SDK can persist and resume paused runs (a minimal sketch follows below).
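For example, a local run might export the connection string before the first call; the URL below is a placeholder for your own database:
import os

# Point the SDK's internal PostgresSaver at a reachable database.
# This local connection string is a placeholder; substitute your own.
os.environ.setdefault(
    "DATABASE_URL",
    "postgresql+psycopg://agent_user:agent_pass@localhost:5432/agent_search",
)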
Example paused result for subquestion review:
from agent_search import advanced_rag
outcome = advanced_rag(
"Summarize the customer feedback themes.",
vector_store=vector_store,
model=model,
hitl_subquestions=True,
)
print(outcome.status) # "paused"
print(outcome.review.kind) # "subquestion_review"
print(outcome.review.items[0].text)
Example paused result for query expansion review:
outcome = advanced_rag(
"Summarize the customer feedback themes.",
vector_store=vector_store,
model=model,
hitl_query_expansion=True,
)
print(outcome.status) # "paused"
print(outcome.review.kind) # "query_expansion_review"
print(outcome.review.items[0].text)
Enable both review stages in one run:
outcome = advanced_rag(
"Summarize the customer feedback themes.",
vector_store=vector_store,
model=model,
hitl_subquestions=True,
hitl_query_expansion=True,
)
Resume with SDK helpers:
resume = outcome.review.with_decisions(
outcome.review.items[0].approve(),
outcome.review.items[1].edit("Theme 2 (billing and invoices)"),
)
resumed = advanced_rag(
"Summarize the customer feedback themes.",
model=model,
vector_store=vector_store,
resume=resume,
)
print(resumed.response.output)
For simple approval flows:
resume = outcome.review.approve_all()
Detailed end-to-end example with both pause stages:
from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter
vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)
question = "Summarize the customer feedback themes from the support archive."
thread_id = "550e8400-e29b-41d4-a716-446655440230"
first = advanced_rag(
question,
vector_store=vector_store,
model=model,
hitl_subquestions=True,
hitl_query_expansion=True,
config={"thread_id": thread_id},
)
assert first.status == "paused"
assert first.review.kind == "subquestion_review"
for item in first.review.items:
print(item.item_id, item.text)
subquestion_resume = first.review.with_decisions(
first.review.items[0].approve(),
first.review.items[1].edit("What billing and invoice complaints show up most often?"),
first.review.items[2].reject(),
)
second = advanced_rag(
question,
vector_store=vector_store,
model=model,
resume=subquestion_resume,
)
assert second.status == "paused"
assert second.review.kind == "query_expansion_review"
for item in second.review.items:
print(item.item_id, item.text)
query_resume = second.review.with_decisions(
second.review.items[0].approve(),
second.review.items[1].edit("invoice confusion refunds duplicate charge complaints"),
)
final = advanced_rag(
question,
vector_store=vector_store,
model=model,
resume=query_resume,
)
assert final.status == "completed"
print(final.response.output)
Decision semantics:
- approve() keeps the item unchanged.
- edit("...") replaces the item text before the run continues.
- reject() removes the item from the next stage entirely.
- approve_all() is the shortcut when you want to resume without per-item changes.
- If you set config["thread_id"], use a valid UUID string.
Advanced callers can still pass raw config["controls"]["hitl"], but the top-level HITL review toggles are now the preferred public API.
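If you only need unattended approval, a short loop can drive a run to completion. This is a sketch assuming the pause/resume contract shown above; the step cap is illustrative:
# Approve every pause until the run completes.
outcome = advanced_rag(
    question,
    vector_store=vector_store,
    model=model,
    hitl_subquestions=True,
    hitl_query_expansion=True,
)
for _ in range(4):  # safety cap; at most two pauses are expected
    if outcome.status != "paused":
        break
    outcome = advanced_rag(
        question,
        vector_store=vector_store,
        model=model,
        resume=outcome.review.approve_all(),
    )
print(outcome.response.output)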
Prompt customization
Keep reusable prompt defaults in the existing config map, then override only the keys you need per run.
from copy import deepcopy
from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter
vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)
client_config = {
"thread_id": "customer-42",
"custom_prompts": {
"subanswer": "Answer each sub-question with concise cited evidence only.",
"synthesis": "Write a short final synthesis that preserves citation markers.",
},
}
response = advanced_rag(
"What changed in NATO maritime policy?",
vector_store=vector_store,
model=model,
config=client_config,
)
print(response.output)
Per-run overrides should be merged into a fresh copy so one call does not mutate the reusable defaults for the next call.
run_config = deepcopy(client_config)
run_config["custom_prompts"] = {
**run_config.get("custom_prompts", {}),
"synthesis": "Return a two-paragraph answer and keep every citation marker.",
}
response = advanced_rag(
"Summarize the policy shift for shipping operators.",
vector_store=vector_store,
model=model,
config=run_config,
)
Merge and fallback behavior:
- Built-in runtime defaults apply when custom_prompts is omitted.
- Client-level config["custom_prompts"] replaces built-ins on a per-key basis.
- Per-run merged values replace only the keys you override for that call (see the merge sketch below).
- Use custom_prompts in Python code; the supported keys are subanswer and synthesis.
- Prompt overrides change generation instructions only. Citation validation and fallback behavior remain enforced in runtime code.
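The per-key precedence behaves like a plain dict merge. A sketch of how an effective prompt set might resolve (the built-in texts are invented placeholders):
# Precedence: built-ins < client config < per-run config, per key.
builtin_prompts = {
    "subanswer": "<built-in subanswer prompt>",
    "synthesis": "<built-in synthesis prompt>",
}
client_prompts = {"synthesis": "Write a short final synthesis with citations."}
per_run_prompts = {"synthesis": "Return two paragraphs and keep citation markers."}

effective = {**builtin_prompts, **client_prompts, **per_run_prompts}
# effective["subanswer"] is still the built-in text;
# effective["synthesis"] is the per-run override.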
You can keep reusable prompt defaults at the top level and place per-run overrides in runtime_config.custom_prompts:
response = advanced_rag(
"Which runtime controls stay default-off?",
vector_store=vector_store,
model=model,
config={
"thread_id": "550e8400-e29b-41d4-a716-446655440310",
"custom_prompts": {
"subanswer": "Answer each sub-question with concise cited evidence only.",
"synthesis": "Write a short synthesis with citations.",
},
"runtime_config": {
"custom_prompts": {
"synthesis": "Return a two-paragraph answer and keep every citation marker."
}
},
},
)
runtime_config is additive. Omit it to preserve the prior prompt behavior.
Requirements
- Python >=3.11,<3.14
- A compatible vector store and chat model, as shown above.
Build
cd sdk/core
python -m build
Example script
A self-contained HITL walkthrough that imports the SDK and simulates pause/resume decisions lives at examples/hitl_walkthrough.py.
Run it from the package root:
cd sdk/core
python examples/hitl_walkthrough.py
Supported API
The supported callable exported by agent_search is:
advanced_rag
Notes about advanced_rag(...):
- It is a synchronous call that runs the full retrieval-and-answer workflow and returns a RuntimeAgentRunResponse.
- You supply the model and vector store; the SDK orchestrates the LangGraph-based runtime around them.
- Optional config={"thread_id": "..."} lets you pass a stable execution identity into the run.
- Optional hitl_subquestions=True and hitl_query_expansion=True opt into user review checkpoints.
- If you pass langfuse_callback=..., the SDK includes that callback in runtime tracing (a wiring sketch follows this list). langfuse_settings is accepted for compatibility but ignored unless you provide an explicit langfuse_callback.
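Wiring in tracing might look like the following. The import assumes langfuse's LangChain-compatible handler; the path differs across langfuse major versions, so verify it against your installed release:
# Pass a Langfuse LangChain callback into the run.
from langfuse.callback import CallbackHandler  # path varies by langfuse version

langfuse_callback = CallbackHandler()  # reads LANGFUSE_* environment variables

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
    langfuse_callback=langfuse_callback,
)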
advanced_rag(...) output schema:
RuntimeAgentRunResponse(
main_question: str,
thread_id: str,
sub_answers: list[SubQuestionAnswer],
sub_qa: list[SubQuestionAnswer],
output: str,
final_citations: list[CitationSourceRow],
)
Read additive sub-answer fields with a compatibility fallback:
sub_answers = response.sub_answers or response.sub_qa
for item in sub_answers:
print(item.sub_question, item.sub_answer)
sub_answers is the canonical additive field for new reads. sub_qa remains available for compatibility.
Vector store compatibility
The SDK runtime expects similarity_search(query, k, filter=None).
For LangChain-backed stores, use:
agent_search.vectorstore.langchain_adapter.LangChainVectorStoreAdapter
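For example, wrapping LangChain's in-memory store (InMemoryVectorStore and OpenAIEmbeddings are illustrative choices; any LangChain vector store should adapt the same way):
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

# Build a small LangChain store, then adapt it for the SDK.
store = InMemoryVectorStore(embedding=OpenAIEmbeddings())
store.add_texts(["pgvector is a Postgres extension for vector similarity search."])
vector_store = LangChainVectorStoreAdapter(store)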
Notes
- This package is the SDK surface only. For the full app experience, run the repository with Docker Compose.
- The PyPI package is intentionally narrower than the backend internals; consumer integrations should rely on advanced_rag(...) only.
- For SDK-only use, install from PyPI and supply your own model + vector store.
Release guidance
Use the repository release script from project root:
./scripts/release_sdk.sh
The release script verifies the built wheel includes the agent_search package before upload.
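If you want the same spot-check by hand, a sketch assuming the default dist/ layout produced by python -m build:
# Confirm the built wheel ships the agent_search package.
import glob
import zipfile

wheel_path = glob.glob("dist/agent_search_core-*.whl")[0]
names = zipfile.ZipFile(wheel_path).namelist()
assert any(name.startswith("agent_search/") for name in names), "package missing"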
Publish flow (requires TWINE_API_TOKEN):
PUBLISH=1 TWINE_API_TOKEN=*** ./scripts/release_sdk.sh
Tag format used by CI release workflow:
agent-search-core-v<version>