In-process SDK runtime for agent-search

agent-search core SDK

In-process Python SDK for agent-search.

The PyPI package is intentionally narrow: consumers should call advanced_rag(...) and treat that as the supported entrypoint.

The SDK always requires both:

  • A chat model (for example langchain_openai.ChatOpenAI)
  • A vector store that implements similarity_search(query, k, filter=None)

It does not auto-build these dependencies for you.

Install (PyPI)

python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install agent-search-core
python -c "import agent_search; print(agent_search.__file__)"

Quick start

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
)
print(response.output)

Contract notes for 1.0.12

Use these canonical names in new config payloads:

  • custom_prompts
  • runtime_config

Compatibility notes:

  • custom-prompts is still accepted as an input alias, but new code should send custom_prompts (see the short example after this list).
  • advanced_rag(...) remains the supported sync entrypoint for agent-search-core.
  • For HITL flows, use the checkpointed runtime runner described below.
  • Langfuse tracing is no longer supported in the SDK/runtime.
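
As a quick illustration of the alias handling (the prompt values here are placeholders):

legacy_config = {"custom-prompts": {"synthesis": "Keep citation markers."}}     # still accepted
canonical_config = {"custom_prompts": {"synthesis": "Keep citation markers."}}  # preferred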

Human-in-the-loop (HITL)

agent-search-core supports one opt-in review stage on advanced_rag(...):

  • hitl_subquestions=True pauses after decomposition so the caller can review or edit subquestions.
  • Subquestion review is the only HITL checkpoint in the SDK.
  • Query expansion no longer has a separate review checkpoint.

The SDK returns a normalized review object when a run pauses, and resume calls use SDK-owned decision helpers instead of raw backend payloads.

HITL still requires checkpoint persistence, but the SDK no longer falls back to DATABASE_URL. The caller must provide the checkpoint Postgres database explicitly on every checkpointed call:

  • Provision a reachable Postgres database for LangGraph checkpoints before enabling HITL.
  • Pass checkpoint_db_url="postgresql+psycopg://..." to advanced_rag(...) for the initial HITL call and every resume call.
  • The runtime uses that caller-provided Postgres DB for checkpoint persistence only.
  • On first use, the runtime checks whether that DB already has the LangGraph checkpoint tables (checkpoint_migrations, checkpoints, checkpoint_blobs, checkpoint_writes) and bootstraps them only when missing; a sanity-check sketch follows this list.
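
If you want to confirm that state up front, the following sketch is one way to do it. It is not part of the SDK; it assumes the DSN and credentials from the example below and uses psycopg directly:

import psycopg

# psycopg.connect() takes a plain postgresql:// DSN; drop the "+psycopg"
# driver suffix used in the SQLAlchemy-style checkpoint_db_url.
dsn = "postgresql://agent_user:agent_pass@localhost:5432/agent_search"

expected = {"checkpoint_migrations", "checkpoints", "checkpoint_blobs", "checkpoint_writes"}
with psycopg.connect(dsn) as conn:
    rows = conn.execute(
        "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"
    ).fetchall()
present = expected & {name for (name,) in rows}
print("checkpoint tables present:", sorted(present) or "none (runtime will bootstrap them)")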

Example paused result for subquestion review:

from agent_search import advanced_rag

outcome = advanced_rag(
    "Summarize the customer feedback themes.",
    vector_store=vector_store,
    model=model,
    hitl_subquestions=True,
    checkpoint_db_url="postgresql+psycopg://agent_user:agent_pass@localhost:5432/agent_search",
)
print(outcome.status)  # "paused"
print(outcome.review.kind)  # "subquestion_review"
print(outcome.review.items[0].text)

Resume with SDK helpers:

resume = outcome.review.with_decisions(
    outcome.review.items[0].approve(),
    outcome.review.items[1].edit("Theme 2 (billing and invoices)"),
)

resumed = advanced_rag(
    "Summarize the customer feedback themes.",
    model=model,
    vector_store=vector_store,
    resume=resume,
    checkpoint_db_url="postgresql+psycopg://agent_user:agent_pass@localhost:5432/agent_search",
)
print(resumed.response.output)

For simple approval flows:

resume = outcome.review.approve_all()

Detailed end-to-end example:

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)
question = "Summarize the customer feedback themes from the support archive."

first = advanced_rag(
    question,
    vector_store=vector_store,
    model=model,
    hitl_subquestions=True,
    checkpoint_db_url="postgresql+psycopg://agent_user:agent_pass@localhost:5432/agent_search",
)
assert first.status == "paused"
assert first.review.kind == "subquestion_review"

for item in first.review.items:
    print(item.item_id, item.text)

resume = first.review.with_decisions(
    first.review.items[0].approve(),
    first.review.items[1].edit("What billing and invoice complaints show up most often?"),
    first.review.items[2].reject(),
)

final = advanced_rag(
    question,
    vector_store=vector_store,
    model=model,
    resume=resume,
    checkpoint_db_url="postgresql+psycopg://agent_user:agent_pass@localhost:5432/agent_search",
)
assert final.status == "completed"
print(final.response.output)

Decision semantics:

  • approve() keeps the item unchanged.
  • edit("...") replaces the item text before the run continues.
  • reject() removes the item from the next stage entirely.
  • approve_all() is the shortcut when you want to resume without per-item changes.

Advanced callers can still pass raw config["controls"]["hitl"], but the top-level HITL review toggles are now the preferred public API.
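
For reference, the raw path looks like this. The inner keys are backend-defined and intentionally elided here, so treat the shape as illustrative only:

config = {
    "controls": {
        "hitl": {
            # ... backend-defined HITL settings (not part of the documented
            # SDK surface); prefer hitl_subquestions=True shown above.
        },
    },
}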

Prompt customization

Keep reusable prompt defaults in the existing config map, then override only the keys you need per run.

from copy import deepcopy

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

client_config = {
    "custom_prompts": {
        "subanswer": "Answer each sub-question with concise cited evidence only.",
        "synthesis": "Write a short final synthesis that preserves citation markers.",
    },
}

response = advanced_rag(
    "What changed in NATO maritime policy?",
    vector_store=vector_store,
    model=model,
    config=client_config,
)
print(response.output)

Per-run overrides should be merged into a fresh copy so one call does not mutate the reusable defaults for the next call.

run_config = deepcopy(client_config)
run_config["custom_prompts"] = {
    **run_config.get("custom_prompts", {}),
    "synthesis": "Return a two-paragraph answer and keep every citation marker.",
}

response = advanced_rag(
    "Summarize the policy shift for shipping operators.",
    vector_store=vector_store,
    model=model,
    config=run_config,
)

Merge and fallback behavior:

  • Built-in runtime defaults apply when custom_prompts is omitted.
  • Client-level config["custom_prompts"] replaces built-ins on a per-key basis.
  • Per-run merged values replace only the keys you override for that call.
  • Use custom_prompts in Python code; the supported keys are subanswer and synthesis.
  • Prompt overrides change generation instructions only. Citation validation and fallback behavior remain enforced in runtime code.

You can keep reusable prompt defaults at the top level and place per-run overrides in runtime_config.custom_prompts:

response = advanced_rag(
    "Which runtime controls stay default-off?",
    vector_store=vector_store,
    model=model,
    config={
        "custom_prompts": {
            "subanswer": "Answer each sub-question with concise cited evidence only.",
            "synthesis": "Write a short synthesis with citations.",
        },
        "runtime_config": {
            "custom_prompts": {
                "synthesis": "Return a two-paragraph answer and keep every citation marker."
            }
        },
    },
)

runtime_config is additive. Omit it to preserve the prior prompt behavior.
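
The effective merge can be pictured as ordinary dict layering. This is an illustrative sketch of the precedence described above, not SDK code; the built-in placeholder values are invented, and config stands for the payload from the previous example:

builtin_defaults = {"subanswer": "<built-in>", "synthesis": "<built-in>"}
effective_prompts = {
    **builtin_defaults,                                            # runtime defaults
    **config.get("custom_prompts", {}),                            # client-level overrides
    **config.get("runtime_config", {}).get("custom_prompts", {}),  # per-run overrides
}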

Requirements

  • Python >=3.11,<3.14
  • A compatible vector store and chat model as shown above.

Build

cd sdk/core
python -m build

Example script

A self-contained HITL walkthrough that imports the SDK and simulates pause/resume decisions lives at examples/hitl_walkthrough.py.

Run it from the package root:

cd sdk/core
python examples/hitl_walkthrough.py

Supported API

The supported callable exported by agent_search is:

  • advanced_rag

Notes about advanced_rag(...):

  • It is a synchronous call that runs the full retrieval-and-answer workflow and returns a RuntimeAgentRunResponse.
  • You supply the model and vector store; the SDK orchestrates the LangGraph-based runtime around them.
  • Optional hitl_subquestions=True opts into subquestion review checkpoints.
  • Checkpointed runs must also pass checkpoint_db_url so the SDK can use that Postgres DB for LangGraph checkpoints.
  • Optional config={"custom_prompts": {...}} lets you override prompt instructions per run.

advanced_rag(...) output schema:

RuntimeAgentRunResponse(
  main_question: str,
  sub_answers: list[SubQuestionAnswer],
  sub_qa: list[SubQuestionAnswer],
  output: str,
  final_citations: list[CitationSourceRow],
)

sub_answers and sub_qa are answer rows. Each item contains both the sub-question text and the corresponding sub-answer text.

Read additive sub-answer fields like this:

for item in response.sub_answers:
    print(item.sub_question, item.sub_answer)

sub_answers is the canonical additive field for new reads. sub_qa remains available as the compatibility alias, and the SDK backfills whichever one is omitted so both fields resolve to the same answer rows.
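
A quick way to see that aliasing in practice:

# Both fields resolve to the same answer rows, so either read is safe.
pairs = [(item.sub_question, item.sub_answer) for item in response.sub_answers]
alias_pairs = [(item.sub_question, item.sub_answer) for item in response.sub_qa]
assert pairs == alias_pairs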

If you need the plain decomposed question list without answers, read decomposition_sub_questions from the async status payload instead of RuntimeAgentRunResponse:

# `client` and `job_id` come from the async job surface, which is
# separate from this package's synchronous advanced_rag(...) entrypoint.
status = client.get_run_status(job_id)

for sub_question in status.decomposition_sub_questions:
    print(sub_question)

for item in status.sub_answers:
    print(item.sub_question, item.sub_answer)

Those fields are intentionally separate:

  • decomposition_sub_questions: list[str] of generated sub-questions only.
  • sub_answers: list[SubQuestionAnswer] with question-and-answer pairs.
  • sub_qa: compatibility alias for sub_answers.

Vector store compatibility

The runtime SDK expects similarity_search(query, k, filter=None). For LangChain-backed stores, use:

  • agent_search.vectorstore.langchain_adapter.LangChainVectorStoreAdapter
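
Stores that are not LangChain-backed can be passed directly as long as they duck-type that method. A minimal illustrative sketch (the keyword scoring and the LangChain Document return type are assumptions, not SDK requirements):

from langchain_core.documents import Document

class KeywordStore:
    """Toy store: naive keyword overlap stands in for vector similarity."""

    def __init__(self, texts: list[str]):
        self._texts = texts

    def similarity_search(self, query: str, k: int, filter=None):
        query_terms = set(query.lower().split())
        ranked = sorted(
            self._texts,
            key=lambda text: len(query_terms & set(text.lower().split())),
            reverse=True,
        )
        return [Document(page_content=text) for text in ranked[:k]]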

Notes

  • This package is the SDK surface only. For the full app experience, run the repository with Docker Compose.
  • The PyPI package is intentionally narrower than the backend internals; consumer integrations should rely on advanced_rag(...) only.
  • For SDK-only use, install from PyPI and supply your own model + vector store.

Release guidance

Use the repository release script from project root:

./scripts/release_sdk.sh

The release script verifies the built wheel includes the agent_search package before upload.

Publish flow (requires TWINE_API_TOKEN):

PUBLISH=1 TWINE_API_TOKEN=*** ./scripts/release_sdk.sh

Tag format used by the CI release workflow:

  • agent-search-core-v<version> (for example, agent-search-core-v1.0.12)
