In-process SDK runtime for agent-search with optional callback-driven Langfuse tracing

agent-search core SDK

In-process Python SDK for agent-search.

The PyPI package is intentionally narrow: consumers should call advanced_rag(...) and treat that as the supported entrypoint.

The SDK always requires both:

  • A chat model (for example langchain_openai.ChatOpenAI)
  • A vector store that implements similarity_search(query, k, filter=None)

It does not auto-build these dependencies for you.

Install (PyPI)

python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install agent-search-core
python -c "import agent_search; print(agent_search.__file__)"

Quick start

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
)
print(response.output)

Contract notes for 1.0.3

Use these canonical names in new config payloads (a combined example follows the compatibility notes):

  • thread_id
  • custom_prompts
  • runtime_config

Compatibility notes:

  • custom-prompts is still accepted as an input alias, but new code should send custom_prompts.
  • advanced_rag(...) remains the supported sync entrypoint for agent-search-core.
  • For HITL flows, use the checkpointed runtime runner described below.
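
Put together, a minimal config payload using the canonical names (values here are illustrative):

config = {
    "thread_id": "550e8400-e29b-41d4-a716-446655440000",
    "custom_prompts": {  # preferred over the legacy "custom-prompts" alias
        "synthesis": "Write a short synthesis with citations.",
    },
    "runtime_config": {},  # optional and additive; omit when unused
}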

Human-in-the-loop (HITL)

agent-search-core supports HITL review/resume for subquestion decomposition via the checkpointed runtime runner. Use run_checkpointed_agent(...) with HITL controls to pause, inspect the interrupt payload, then resume with typed decisions.

Start a run with HITL enabled:

from langchain_openai import ChatOpenAI
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter
from agent_search.runtime.runner import run_checkpointed_agent
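
# Note: `services` and `schemas` below are repository-level modules, not part
# of the PyPI package (see Notes); run this example from the repo checkout.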
from services.agent_service import build_graph_run_metadata
from schemas import (
    RuntimeAgentRunControls,
    RuntimeAgentRunRequest,
    RuntimeHitlControl,
    RuntimeSubquestionHitlControl,
)

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)
run_metadata = build_graph_run_metadata(thread_id="550e8400-e29b-41d4-a716-446655440000")

payload = RuntimeAgentRunRequest(
    query="Summarize the customer feedback themes.",
    thread_id=run_metadata.thread_id,
    controls=RuntimeAgentRunControls(
        hitl=RuntimeHitlControl(subquestions=RuntimeSubquestionHitlControl(enabled=True))
    ),
)

outcome = run_checkpointed_agent(
    payload,
    model=model,
    vector_store=vector_store,
    run_metadata=run_metadata,
)

Example pause payload (the run pauses at subquestions_ready):

print(outcome.status)  # "paused"
print(outcome.interrupt_payload)
# {
#   "kind": "subquestion_review",
#   "stage": "subquestions_ready",
#   "checkpoint_id": "550e8400-e29b-41d4-a716-446655440000",
#   "subquestions": [
#     {"subquestion_id": "sq-1", "sub_question": "Theme 1?", "index": 0},
#     {"subquestion_id": "sq-2", "sub_question": "Theme 2?", "index": 1},
#   ],
# }

Resume with typed decisions (approve, edit, deny, skip):

from schemas import RuntimeSubquestionDecision, RuntimeSubquestionResumeEnvelope

resume = RuntimeSubquestionResumeEnvelope(
    checkpoint_id=outcome.checkpoint_id,
    decisions=[
        RuntimeSubquestionDecision(subquestion_id="sq-1", action="approve"),
        RuntimeSubquestionDecision(
            subquestion_id="sq-2",
            action="edit",
            edited_text="Theme 2 (billing and invoices)",
        ),
    ],
)

resumed = run_checkpointed_agent(
    payload,
    model=model,
    vector_store=vector_store,
    run_metadata=run_metadata,
    resume=resume,
)
print(resumed.response.output)
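
The remaining actions use the same decision shape and take no edited_text. A shape-only sketch (the exact downstream semantics of deny versus skip are defined in runtime code):

decisions = [
    # Shape sketch only: deny and skip carry no edited_text.
    RuntimeSubquestionDecision(subquestion_id="sq-1", action="deny"),
    RuntimeSubquestionDecision(subquestion_id="sq-2", action="skip"),
]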

Prompt customization

Keep reusable prompt defaults in the existing config map, then override only the keys you need per run.

from copy import deepcopy

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

client_config = {
    "thread_id": "customer-42",
    "custom_prompts": {
        "subanswer": "Answer each sub-question with concise cited evidence only.",
        "synthesis": "Write a short final synthesis that preserves citation markers.",
    },
}

response = advanced_rag(
    "What changed in NATO maritime policy?",
    vector_store=vector_store,
    model=model,
    config=client_config,
)
print(response.output)

Per-run overrides should be merged into a fresh copy so one call does not mutate the reusable defaults for the next call.

run_config = deepcopy(client_config)
run_config["custom_prompts"] = {
    **run_config.get("custom_prompts", {}),
    "synthesis": "Return a two-paragraph answer and keep every citation marker.",
}

response = advanced_rag(
    "Summarize the policy shift for shipping operators.",
    vector_store=vector_store,
    model=model,
    config=run_config,
)

Merge and fallback behavior (a code sketch follows this list):

  • Built-in runtime defaults apply when custom_prompts is omitted.
  • Client-level config["custom_prompts"] replaces built-ins on a per-key basis.
  • Per-run merged values replace only the keys you override for that call.
  • Use custom_prompts in Python code; the supported keys are subanswer and synthesis.
  • Prompt overrides change generation instructions only. Citation validation and fallback behavior remain enforced in runtime code.
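
A minimal sketch of that per-key precedence, using hypothetical placeholder defaults (the real built-ins live in runtime code):

# Hypothetical placeholders; the actual defaults are defined by the runtime.
BUILTIN_PROMPTS = {
    "subanswer": "<built-in subanswer prompt>",
    "synthesis": "<built-in synthesis prompt>",
}

def resolve_prompts(client_prompts=None, run_prompts=None):
    # Later dicts win per key: built-ins < client config < per-run overrides.
    return {**BUILTIN_PROMPTS, **(client_prompts or {}), **(run_prompts or {})}

resolved = resolve_prompts(
    client_prompts={"synthesis": "Write a short synthesis with citations."},
    run_prompts={"synthesis": "Return a two-paragraph answer."},
)
# resolved["subanswer"] is the built-in; resolved["synthesis"] is the per-run value.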

You can keep reusable prompt defaults at the top level and place per-run overrides in runtime_config.custom_prompts:

response = advanced_rag(
    "Which runtime controls stay default-off?",
    vector_store=vector_store,
    model=model,
    config={
        "thread_id": "550e8400-e29b-41d4-a716-446655440310",
        "custom_prompts": {
            "subanswer": "Answer each sub-question with concise cited evidence only.",
            "synthesis": "Write a short synthesis with citations.",
        },
        "runtime_config": {
            "custom_prompts": {
                "synthesis": "Return a two-paragraph answer and keep every citation marker."
            }
        },
    },
)

runtime_config is additive. Omit it to preserve the prior prompt behavior.

Requirements

  • Python >=3.11,<3.14
  • A compatible vector store and chat model as shown above.

Build

cd sdk/core
python -m build

Supported API

The supported callable exported by agent_search is:

  • advanced_rag

Notes about advanced_rag(...):

  • It is a synchronous call that runs the full retrieval-and-answer workflow and returns a RuntimeAgentRunResponse.
  • You supply the model and vector store; the SDK orchestrates the LangGraph-based runtime around them.
  • Optional config={"thread_id": "..."} lets you pass a stable execution identity into the run.
  • If you pass langfuse_callback=..., the SDK includes that callback in runtime tracing (see the sketch after this list).
  • langfuse_settings is accepted for compatibility but ignored unless you provide an explicit langfuse_callback.
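
A hedged sketch of wiring up tracing with the Langfuse LangChain handler; the CallbackHandler import path varies across Langfuse SDK versions (the v2 path is shown), and credentials are assumed to be in LANGFUSE_* environment variables:

from langfuse.callback import CallbackHandler  # langfuse v2 import path

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from env.
handler = CallbackHandler()

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
    langfuse_callback=handler,
)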

advanced_rag(...) output schema:

RuntimeAgentRunResponse(
  main_question: str,
  thread_id: str,
  sub_answers: list[SubQuestionAnswer],
  sub_qa: list[SubQuestionAnswer],
  output: str,
  final_citations: list[CitationSourceRow],
)

Read additive sub-answer fields with a compatibility fallback:

sub_answers = response.sub_answers or response.sub_qa
for item in sub_answers:
    print(item.sub_question, item.sub_answer)

sub_answers is the canonical additive field for new reads. sub_qa remains available for compatibility.

Vector store compatibility

The runtime SDK expects similarity_search(query, k, filter=None) (a duck-typed sketch follows the list). For LangChain-backed stores, use:

  • agent_search.vectorstore.langchain_adapter.LangChainVectorStoreAdapter
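
Any object exposing that method satisfies the contract. A duck-typed sketch, where StaticDoc and the naive filtering are illustrative stand-ins rather than SDK types (real stores rank by embedding similarity):

from dataclasses import dataclass, field

@dataclass
class StaticDoc:
    page_content: str
    metadata: dict = field(default_factory=dict)

class StaticVectorStore:
    """Toy store: metadata filtering plus first-k truncation, no ranking."""

    def __init__(self, docs: list[StaticDoc]):
        self.docs = docs

    def similarity_search(self, query: str, k: int, filter=None):
        matches = self.docs
        if filter:
            matches = [
                d for d in matches
                if all(d.metadata.get(key) == value for key, value in filter.items())
            ]
        return matches[:k]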

Notes

  • This package is the SDK surface only. For the full app experience, run the repository with Docker Compose.
  • The PyPI package is intentionally narrower than the backend internals; consumer integrations should rely on advanced_rag(...) only.
  • For SDK-only use, install from PyPI and supply your own model + vector store.

Release guidance

Use the repository release script from project root:

./scripts/release_sdk.sh

The release script verifies the built wheel includes the agent_search package before upload.
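
A rough manual equivalent of that check, assuming build artifacts land in dist/ (the script's actual logic may differ):

import glob
import zipfile

# Take the lexicographically last wheel in dist/ and confirm the
# agent_search package is actually inside it before publishing.
wheel_path = sorted(glob.glob("dist/agent_search_core-*.whl"))[-1]
with zipfile.ZipFile(wheel_path) as wheel:
    assert any(name.startswith("agent_search/") for name in wheel.namelist()), \
        "agent_search package missing from wheel"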

Publish flow (requires TWINE_API_TOKEN):

PUBLISH=1 TWINE_API_TOKEN=*** ./scripts/release_sdk.sh

Tag format used by CI release workflow:

  • agent-search-core-v<version>
