
browseai

Reliable research infrastructure for AI agents. Python SDK for BrowseAI Dev — the research layer for LangChain, CrewAI, and custom agent pipelines.

Install

pip install browseai

Quick Start

from browseai import BrowseAI

client = BrowseAI(api_key="bai_xxx")

# Research with citations
result = client.ask("What is quantum computing?")
print(result.answer)
print(f"Confidence: {result.confidence:.0%}")
for source in result.sources:
    print(f"  - {source.title}: {source.url}")

# Thorough mode — auto-retries if confidence < 60%
deep = client.ask("What is quantum computing?", depth="thorough")

# Web search
results = client.search("latest AI news", limit=5)

# Page extraction
page = client.open("https://example.com")

# Structured extraction from a URL
extract = client.extract("https://example.com", query="pricing info")

# Compare raw LLM vs evidence-backed
compare = client.compare("Is Python faster than Rust?")

# Submit feedback to improve accuracy
client.feedback(result_id=result.share_id, rating="good")

Async

from browseai import AsyncBrowseAI

async with AsyncBrowseAI(api_key="bai_xxx") as client:
    result = await client.ask("What is quantum computing?")
    # Thorough mode works with async too
    deep = await client.ask("What is quantum computing?", depth="thorough")

Streaming (REST API)

For real-time progress events, use the streaming endpoint directly:

import httpx

with httpx.stream("POST", "https://browseai.dev/api/browse/answer/stream",
    json={"query": "What is quantum computing?"},
    headers={"X-Tavily-Key": "tvly-xxx", "X-OpenRouter-Key": "sk-or-xxx"}
) as response:
    for line in response.iter_lines():
        if line.startswith("data: "):
            print(line[6:])

Events: trace (progress), sources (discovered early), result (final answer), done.
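A minimal way to dispatch those events client-side (a sketch; the exact JSON payload shape, including the `type` field, is an assumption based on the event names above):

```python
import json

# Minimal dispatcher for one SSE line from the streaming endpoint.
# Lines arrive as 'data: {...}'; the payload's "type" field is assumed
# to name one of the events above: trace, sources, result, done.
def handle_sse_line(line: str):
    if not line.startswith("data: "):
        return None  # skip blank keep-alives and comment lines
    event = json.loads(line[len("data: "):])
    return event.get("type"), event
```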

Research Memory (Sessions)

Persistent research sessions that accumulate knowledge across multiple queries. Later queries recall prior knowledge — faster, cheaper, more coherent.

Sessions require a BrowseAI API key (api_key="bai_xxx") for identity and ownership. BYOK clients (tavily_key/openrouter_key only) can use search/answer but cannot create or access sessions. Get a free API key at browseai.dev/dashboard.
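The credential rules above can be expressed as a small helper. A sketch only: the `bai_` prefix comes from the examples in this document, while the function and mode names are hypothetical:

```python
# Which auth mode a credential set implies, per the rules above: a BrowseAI
# key (bai_ prefix) unlocks sessions; BYOK keys alone allow search/answer
# but not sessions. The "full"/"byok" mode names are hypothetical.
def auth_mode(api_key=None, tavily_key=None, openrouter_key=None):
    if api_key and api_key.startswith("bai_"):
        return "full"   # search/answer + sessions
    if tavily_key and openrouter_key:
        return "byok"   # search/answer only
    raise ValueError("provide api_key or both tavily_key and openrouter_key")
```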

from browseai import BrowseAI

client = BrowseAI(api_key="bai_xxx")

# Create a session
session = client.session("wasm-research")

# Each query builds on previous knowledge
r1 = session.ask("What is WebAssembly?")
r2 = session.ask("How does WASM compare to JavaScript performance?")
# ^ r2 recalls WASM knowledge from r1, only searches for JS perf

# Query accumulated knowledge without new searches
recalled = session.recall("WASM")
for entry in recalled.entries:
    print(f"  {entry.claim} (from: {entry.origin_query})")

# Export all knowledge
knowledge = session.knowledge()

# Delete a session
session.delete()

# List all your sessions
sessions = client.list_sessions()

# Resume an existing session by ID
session = client.get_session("session-id-here")

# Share with other agents
share = session.share()
print(share.url)  # https://browseai.dev/session/share/abc123def456

# Another agent forks and continues the research
forked = client.fork_session(share.share_id)

Async sessions work the same way:

async with AsyncBrowseAI(api_key="bai_xxx") as client:
    session = await client.session("my-project")
    r1 = await session.ask("What is WASM?")
    r2 = await session.ask("WASM vs JS?")

    # Share and fork work async too
    share = await session.share()
    forked = await client.fork_session(share.share_id)

BYOK (Bring Your Own Keys)

Supply your own Tavily and OpenRouter keys instead of a BrowseAI API key (search and answer work; sessions require a BrowseAI key, as noted above):

client = BrowseAI(tavily_key="tvly-xxx", openrouter_key="sk-or-xxx")
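On the wire, BYOK credentials map to the per-provider request headers shown in the streaming example (`X-Tavily-Key`, `X-OpenRouter-Key`). A sketch of building them; the `Authorization: Bearer` form for `bai_` keys is an assumption, not a documented detail:

```python
# Map credentials to request headers. X-Tavily-Key / X-OpenRouter-Key match
# the streaming example above; the Bearer form for bai_ keys is assumed.
def build_headers(api_key=None, tavily_key=None, openrouter_key=None):
    if api_key:
        return {"Authorization": f"Bearer {api_key}"}
    return {"X-Tavily-Key": tavily_key, "X-OpenRouter-Key": openrouter_key}
```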

LangChain

pip install browseai[langchain]
from browseai.integrations.langchain import BrowseAIAskTool

tools = [BrowseAIAskTool(api_key="bai_xxx")]

CrewAI

pip install browseai[crewai]
from crewai import Agent
from browseai.integrations.crewai import BrowseAITool

researcher = Agent(tools=[BrowseAITool(api_key="bai_xxx")])

Download files

Source Distribution

browseai-0.1.4.tar.gz (9.6 kB)

Built Distribution

browseai-0.1.4-py3-none-any.whl (12.0 kB)

File details

Details for the file browseai-0.1.4.tar.gz.

File metadata

  • Download URL: browseai-0.1.4.tar.gz
  • Size: 9.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for browseai-0.1.4.tar.gz

Algorithm    Hash digest
SHA256       d427cafcc3b1d4039ec3494db13993337b7571e451ecacdbda1e17d6aedd075b
MD5          f991ffd41d8d29b7bdf8b8fcf2ae37f3
BLAKE2b-256  048e81ac6565e0ee9a34ff2319cb57a126ba04a12ca3039ec624127b183072fe

File details

Details for the file browseai-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: browseai-0.1.4-py3-none-any.whl
  • Size: 12.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for browseai-0.1.4-py3-none-any.whl

Algorithm    Hash digest
SHA256       005b03c66d382a7dbb125c0f36b4b19d860fddb5c0d030c82c67c5ceae5de0a7
MD5          0670793973fd273821c7630bf9e55aac
BLAKE2b-256  12576228e6aaf19aa77f07f534c251290099ceca97eae9687fd7d481e15a8621
