In-process SDK runtime for agent-search with optional callback-driven Langfuse tracing

agent-search core SDK

In-process Python SDK for agent-search.

The PyPI package is intentionally narrow: consumers should call advanced_rag(...) and treat that as the supported entrypoint.

The SDK always requires both:

  • A chat model (for example langchain_openai.ChatOpenAI)
  • A vector store that implements similarity_search(query, k, filter=None)

It does not auto-build these dependencies for you.
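The similarity_search contract is plain duck typing, so any object exposing that method works. A minimal sketch follows; TinyVectorStore and Document are illustrative stand-ins (not part of the SDK), and the keyword-overlap scoring is a toy substitute for real embedding similarity:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """Minimal stand-in for a retrieved document."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class TinyVectorStore:
    """Toy store satisfying the similarity_search(query, k, filter=None) shape."""

    def __init__(self, docs):
        self._docs = list(docs)

    def similarity_search(self, query, k, filter=None):
        terms = set(query.lower().split())

        def score(doc):
            # Naive keyword overlap; a real store would compare embeddings.
            return len(terms & set(doc.page_content.lower().split()))

        candidates = self._docs
        if filter:
            # Keep only docs whose metadata matches every filter key.
            candidates = [
                d for d in candidates
                if all(d.metadata.get(key) == value for key, value in filter.items())
            ]
        return sorted(candidates, key=score, reverse=True)[:k]


store = TinyVectorStore([
    Document("pgvector adds vector similarity search to Postgres", {"source": "docs"}),
    Document("NATO maritime policy overview", {"source": "news"}),
])
hits = store.similarity_search("what is pgvector", k=1)
print(hits[0].metadata["source"])  # → docs
```

Anything shaped like this can be passed as vector_store; for real LangChain stores, use the shipped adapter shown below.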

Install (PyPI)

python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install agent-search-core
python -c "import agent_search; print(agent_search.__file__)"

Quick start

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

# your_langchain_vector_store is any existing LangChain vector store instance.
vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

response = advanced_rag(
    "What is pgvector?",
    vector_store=vector_store,
    model=model,
)
print(response.output)

Prompt customization

Keep reusable prompt defaults in the existing config map, then override only the keys you need per run.

from copy import deepcopy

from langchain_openai import ChatOpenAI
from agent_search import advanced_rag
from agent_search.vectorstore.langchain_adapter import LangChainVectorStoreAdapter

vector_store = LangChainVectorStoreAdapter(your_langchain_vector_store)
model = ChatOpenAI(model="gpt-4.1-mini", temperature=0.0)

client_config = {
    "thread_id": "customer-42",
    "custom_prompts": {
        "subanswer": "Answer each sub-question with concise cited evidence only.",
        "synthesis": "Write a short final synthesis that preserves citation markers.",
    },
}

response = advanced_rag(
    "What changed in NATO maritime policy?",
    vector_store=vector_store,
    model=model,
    config=client_config,
)
print(response.output)

Per-run overrides should be merged into a fresh copy so one call does not mutate the reusable defaults for the next call.

run_config = deepcopy(client_config)
run_config["custom_prompts"] = {
    **run_config.get("custom_prompts", {}),
    "synthesis": "Return a two-paragraph answer and keep every citation marker.",
}

response = advanced_rag(
    "Summarize the policy shift for shipping operators.",
    vector_store=vector_store,
    model=model,
    config=run_config,
)

Merge and fallback behavior:

  • Built-in runtime defaults apply when custom_prompts is omitted.
  • Client-level config["custom_prompts"] replaces built-ins on a per-key basis.
  • Per-run merged values replace only the keys you override for that call.
  • Set overrides under config["custom_prompts"]; the supported keys are subanswer and synthesis.
  • Prompt overrides change generation instructions only. Citation validation and fallback behavior remain enforced in runtime code.
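The precedence rules above can be sketched with plain dicts. RUNTIME_DEFAULTS and resolve_prompts below are illustrative names, not SDK internals; they only mirror the described per-key merge:

```python
from copy import deepcopy

# Illustrative built-in defaults; the real runtime ships its own prompt texts.
RUNTIME_DEFAULTS = {
    "subanswer": "default subanswer prompt",
    "synthesis": "default synthesis prompt",
}


def resolve_prompts(client_config=None, run_overrides=None):
    """Per-key merge: run overrides beat client config, which beats defaults."""
    merged = dict(RUNTIME_DEFAULTS)
    merged.update((client_config or {}).get("custom_prompts", {}))
    merged.update(run_overrides or {})
    return merged


client_config = {"custom_prompts": {"synthesis": "client synthesis prompt"}}

# Client config replaces only the keys it sets; other keys fall back to defaults.
print(resolve_prompts(client_config)["subanswer"])   # → default subanswer prompt
print(resolve_prompts(client_config)["synthesis"])   # → client synthesis prompt

# A per-run override goes into a fresh copy, leaving client_config untouched.
run_config = deepcopy(client_config)
run_config["custom_prompts"]["synthesis"] = "run synthesis prompt"
print(resolve_prompts(run_config)["synthesis"])      # → run synthesis prompt
print(client_config["custom_prompts"]["synthesis"])  # → client synthesis prompt
```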

Requirements

  • Python >=3.11,<3.14
  • A compatible vector store and chat model as shown above.

Build

cd sdk/core
python -m build

Supported API

The supported callable exported by agent_search is:

  • advanced_rag

Notes about advanced_rag(...):

  • It is a synchronous call that runs the full retrieval-and-answer workflow and returns a RuntimeAgentRunResponse.
  • You supply the model and vector store; the SDK orchestrates the LangGraph-based runtime around them.
  • Optional config={"thread_id": "..."} lets you pass a stable execution identity into the run.
  • If you pass langfuse_callback=..., the SDK includes that callback in runtime tracing.
  • langfuse_settings is accepted for compatibility but ignored unless you provide an explicit langfuse_callback.

advanced_rag(...) output schema:

RuntimeAgentRunResponse(
  main_question: str,
  thread_id: str,
  sub_qa: list[SubQuestionAnswer],
  output: str,
  final_citations: list[CitationSourceRow],
)
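To show how a caller might consume that shape, here is an illustrative sketch. The dataclasses below are local stand-ins that only mirror the schema above; in particular, the question/answer fields on SubQuestionAnswer and the marker/source fields on CitationSourceRow are assumed for illustration, not confirmed SDK field names:

```python
from dataclasses import dataclass


@dataclass
class SubQuestionAnswer:
    question: str  # assumed field name
    answer: str    # assumed field name


@dataclass
class CitationSourceRow:
    marker: str  # assumed field name
    source: str  # assumed field name


@dataclass
class RuntimeAgentRunResponse:
    main_question: str
    thread_id: str
    sub_qa: list
    output: str
    final_citations: list


# A hand-built response in the schema's shape.
response = RuntimeAgentRunResponse(
    main_question="What is pgvector?",
    thread_id="customer-42",
    sub_qa=[SubQuestionAnswer("What is pgvector?", "A Postgres extension [1].")],
    output="pgvector is a Postgres extension for vector search [1].",
    final_citations=[CitationSourceRow("[1]", "pgvector README")],
)

# Typical consumption: show intermediate Q&A, then the cited sources.
for qa in response.sub_qa:
    print(f"{qa.question} -> {qa.answer}")
for row in response.final_citations:
    print(f"{row.marker} {row.source}")
```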

Vector store compatibility

The runtime SDK expects a vector store exposing similarity_search(query, k, filter=None). For LangChain-backed stores, use:

  • agent_search.vectorstore.langchain_adapter.LangChainVectorStoreAdapter

Notes

  • This package is the SDK surface only. For the full app experience, run the repository with Docker Compose.
  • The PyPI package is intentionally narrower than the backend internals; consumer integrations should rely on advanced_rag(...) only.
  • For SDK-only use, install from PyPI and supply your own model + vector store.

Release guidance

Use the repository release script from project root:

./scripts/release_sdk.sh

The release script verifies the built wheel includes the agent_search package before upload.

Publish flow (requires TWINE_API_TOKEN):

PUBLISH=1 TWINE_API_TOKEN=*** ./scripts/release_sdk.sh

Tag format used by CI release workflow:

  • agent-search-core-v<version>
