A powerful, modular RAG-orchestrator that aggregates 10+ search engines (Google, Bing, Firecrawl, Exa, Tavily) into LLM-ready markdown.

Project description

🦙 llama-searcher


llama-searcher is a professional-grade search orchestration framework designed for AI Agents and RAG (Retrieval-Augmented Generation) pipelines. It unifies traditional SEO/SERP APIs with modern neural search engines, transforming raw web data into clean, LLM-ready markdown.


🚀 Key Features

  • Deca-Engine Support: Seamlessly switch between or aggregate results from 10+ providers:
    • Neural AI Search: Firecrawl, Exa (Metaphor), Tavily, Perplexity Sonar.
    • Traditional SERP: Google Custom Search, Bing, SerpApi, Serper.dev, Brave Search, Zenserp.
  • RAG-Ready Output: Automatically cleans HTML, removes boilerplate (nav, footers, scripts), and returns structured markdown optimized for context windows.
  • Smart Orchestration: Concurrent search execution, link deduplication, and intelligent content merging.
  • Event Extraction: Built-in logic for parsing sports, calendars, and community events from structured search metadata.
  • Professional Architecture: Production-ready modular design with standardized logging, error handling, and dynaconf configuration management.
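The deduplication step above can be sketched in a few lines. This is an illustration of the pattern, not the library's internal API: results from several engines are merged while keeping the first occurrence of each (lightly normalized) URL.

```python
# Illustrative sketch of cross-engine link deduplication (not the
# library's actual internals): keep the first result seen per URL,
# normalizing a trailing slash so "a.com/" and "a.com" collapse.
def dedupe_results(results):
    seen = set()
    merged = []
    for item in results:
        key = item["url"].rstrip("/")
        if key not in seen:
            seen.add(key)
            merged.append(item)
    return merged

google = [{"url": "https://a.com/", "engine": "google"}]
exa = [{"url": "https://a.com", "engine": "exa"},
       {"url": "https://b.com", "engine": "exa"}]
print([r["url"] for r in dedupe_results(google + exa)])
# ['https://a.com/', 'https://b.com']
```

In the real orchestrator this runs after the concurrent fetches complete, so each engine's ranking is preserved within the merged list.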

📚 Documentation

For a deep dive into the project structure, configuration, and how to extend it, check out our Tutorial.

🛠️ Tech Stack

  • Core: Python 3.10+, Asynchronous execution with asyncio & aiohttp.
  • Scraping: Headless Playwright (for dynamic content) & httpx (for performance).
  • AI/LLM: LangChain, OpenAI/Gemini compatible API integration, Tiktoken for token optimization.
  • Storage: In-memory vector stores and RAG-based similarity search.
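The token-optimization role Tiktoken plays can be sketched as a budget-trimming step. In the real pipeline tiktoken gives exact token counts; here a crude whitespace split stands in so the sketch stays dependency-free, and `trim_to_budget` is an illustrative name, not a function from the package.

```python
# Sketch of the context-window trimming pattern the stack enables.
# A whitespace split approximates tokenization for illustration only;
# swap in tiktoken's encoder for exact counts.
def trim_to_budget(markdown: str, max_tokens: int) -> str:
    tokens = markdown.split()
    return " ".join(tokens[:max_tokens])

doc = "# Results\n" + "word " * 50
print(len(trim_to_budget(doc, 16).split()))  # 16
```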

🚦 Quick Start

1. Configuration

The project uses dynaconf. Populate your API keys in .secrets.toml:

[default]
GOOGLE_API_KEY = "your_key"
CSE_ID = "your_cse_id"
BING_API_KEY = "your_key"
FIRECRAWL_API_KEY = "your_key"
# ... add other keys as needed

2. Basic Usage

from llama_searcher.api.search import get_events

# Use multiple engines at once
result = get_events(
    search_query="Music festivals in Europe 2024",
    engine="google,exa,tavily"
)
print(result)

3. Running the API

uv run python -m llama_searcher.api.app

4. Running the MCP Server (for AI Agents)

uv run python -m llama_searcher.mcp_server
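Most MCP clients register servers through a JSON config. A hypothetical entry (the file name and schema vary by client; this follows the shape used by Claude Desktop's `claude_desktop_config.json`) could look like:

```json
{
  "mcpServers": {
    "llama-searcher": {
      "command": "uv",
      "args": ["run", "python", "-m", "llama_searcher.mcp_server"]
    }
  }
}
```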

🏗️ Architecture

├── agents/             # LLM Analysis & Summarization Agents
├── api/                # Unified Search Entry point
├── core/               # Fetchers, Cleaners, RAG, and SearchProviders
├── services/           # Orchestration & Domain services
├── utils/              # Configuration (Dynaconf) & Logging
└── main.py             # Entry point

Created with ❤️ for the Advanced Agentic Coding community.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_searcher-0.1.0.tar.gz (18.2 kB)

Uploaded Source

Built Distribution

llama_searcher-0.1.0-py3-none-any.whl (19.7 kB)

Uploaded Python 3

File details

Details for the file llama_searcher-0.1.0.tar.gz.

File metadata

  • Download URL: llama_searcher-0.1.0.tar.gz
  • Upload date:
  • Size: 18.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llama_searcher-0.1.0.tar.gz
Algorithm Hash digest
SHA256 d3fdb53aafec5211893704c0e5603f27f4b6d3c1ffe8f1bdc0edeeedf94fab47
MD5 1142dfcf5d7b28f4bb63fbb3574ef536
BLAKE2b-256 0d12fc0f74b217b03cd2ffee82dacd714999914507763f2351ef067676683411

Provenance

The following attestation bundles were made for llama_searcher-0.1.0.tar.gz:

Publisher: publish.yml on mohamed-em2m/llama-searcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llama_searcher-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llama_searcher-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 19.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llama_searcher-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 5cf39881498bae91b51485ddda69e61e54d614860c3e4169690aef06adabb5ea
MD5 cfc8b19091ee22f27ed84f2695e46c57
BLAKE2b-256 61cf050baeb5628ddc6d308f2627fe1a5540852730d4ea5158fc49cb3a4ab27a

Provenance

The following attestation bundles were made for llama_searcher-0.1.0-py3-none-any.whl:

Publisher: publish.yml on mohamed-em2m/llama-searcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
