
FLAMEHAVEN FileSearch

Open source semantic document search with multi-provider LLM support (Gemini, OpenAI, Claude, Ollama).

Self-hosted RAG search engine. Production-ready in 3 minutes.


Quick Start · Features · Documentation · API Reference · Contributing


🎯 Why FLAMEHAVEN?

Stop sending your sensitive documents to third-party services. Get enterprise-grade semantic search running locally in minutes, not days.

# One command. Three minutes. Done.
docker run -d -p 8000:8000 -e GEMINI_API_KEY="your_key" flamehaven-filesearch:1.5.2

🚀 Fast

Production deployment in 3 minutes
Vector generation in <1ms
Zero ML dependencies

🔒 Private

100% self-hosted
Your data never leaves your infrastructure
Enterprise-grade security

💰 Cost-Effective

Free tier: 1,500 queries/month
No infrastructure costs
Open source & MIT licensed


Features ✨

Core Capabilities

| Capability | Detail |
| --- | --- |
| Search Modes | Keyword, semantic, and hybrid with automatic typo correction |
| 34 File Formats | PDF, DOCX/DOC, XLSX, PPTX, RTF, HTML, CSV, LaTeX, WebVTT, images + plain text — see Document Parsing |
| RAG Pipeline | Structure-aware chunking, sliding-window context enrichment, mtime parse cache |
| Ultra-Fast Vectors | DSP v2.0 generates embeddings in <1ms — no ML frameworks required |
| Source Attribution | Every answer links back to the originating document and chunk |
| Framework SDKs | LangChain, LlamaIndex, Haystack, CrewAI adapters out of the box |
| Enterprise Auth | API key hashing (SHA256+salt), OAuth2/OIDC, fine-grained permissions |
| Admin Dashboard | Real-time metrics, quota management, batch processing (1–100 queries) |
| Flexible Storage | SQLite (default) · PostgreSQL + pgvector · Redis cache (optional) |
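The sliding-window context enrichment row above can be sketched as follows. This is an illustrative implementation only, not FLAMEHAVEN's actual ContextExtractor; the function and field names are hypothetical:

```python
def enrich_chunks(chunks, window=1):
    """Attach neighboring chunks as context (sliding window).

    Hypothetical sketch: each chunk keeps its own text plus up to
    `window` chunks of surrounding text, so a RAG pipeline can answer
    questions that span chunk boundaries.
    """
    enriched = []
    for i, text in enumerate(chunks):
        before = chunks[max(0, i - window):i]
        after = chunks[i + 1:i + 1 + window]
        enriched.append({
            "text": text,
            "context": " ".join(before + [text] + after),
        })
    return enriched

chunks = ["Intro.", "Policy details.", "Appendix."]
out = enrich_chunks(chunks)
# out[1]["context"] == "Intro. Policy details. Appendix."
```

A larger `window` trades retrieval precision for more surrounding context per chunk.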

What changed in each release? See CHANGELOG.md for the full version history.


Quick Start 🚀

Option 1: Docker (Recommended)

The fastest path to production:

docker run -d \
  -p 8000:8000 \
  -e GEMINI_API_KEY="your_gemini_api_key" \
  -e FLAMEHAVEN_ADMIN_KEY="secure_admin_password" \
  -v $(pwd)/data:/app/data \
  flamehaven-filesearch:1.5.2

✅ Server running at http://localhost:8000

Option 2: Python SDK

Perfect for integrating into existing applications:

from flamehaven_filesearch import FlamehavenFileSearch, FileSearchConfig

# Initialize
config = FileSearchConfig(google_api_key="your_gemini_key")
fs = FlamehavenFileSearch(config)

# Upload and search
fs.upload_file("company_handbook.pdf", store="docs")
result = fs.search("What is our remote work policy?", store="docs")

print(result['answer'])
# Output: "Employees can work remotely up to 3 days per week..."
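Every answer carries source attribution. A sketch of consuming it (the result fields shown here are assumptions for illustration, not the SDK's documented schema):

```python
# Hypothetical result shape; check the SDK docs for the real field names.
result = {
    "answer": "Employees can work remotely up to 3 days per week...",
    "sources": [
        {"document": "company_handbook.pdf", "chunk_id": 12, "score": 0.91},
    ],
}

def format_citation(result):
    """Render the answer followed by its source attributions."""
    lines = [result["answer"]]
    for s in result["sources"]:
        lines.append(
            f"  [{s['document']} · chunk {s['chunk_id']} · score {s['score']:.2f}]"
        )
    return "\n".join(lines)

print(format_citation(result))
```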

Option 3: REST API

For language-agnostic integration:

# 1. Generate API key
curl -X POST http://localhost:8000/api/admin/keys \
  -H "X-Admin-Key: your_admin_key" \
  -d '{"name":"production","permissions":["upload","search"]}'

# 2. Upload document
curl -X POST http://localhost:8000/api/upload/single \
  -H "Authorization: Bearer sk_live_abc123..." \
  -F "file=@document.pdf" \
  -F "store=my_docs"

# 3. Search
curl -X POST http://localhost:8000/api/search \
  -H "Authorization: Bearer sk_live_abc123..." \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What are the main findings?",
    "store": "my_docs",
    "search_mode": "hybrid"
  }'
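The same search call can be issued from Python with only the standard library. This sketch builds the request without sending it (sending requires a running server); endpoint, headers, and body mirror the curl example above:

```python
import json
import urllib.request

def build_search_request(base_url, api_key, query, store, mode="hybrid"):
    """Build the POST /api/search request shown in the curl example."""
    body = json.dumps(
        {"query": query, "store": store, "search_mode": mode}
    ).encode()
    return urllib.request.Request(
        f"{base_url}/api/search",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_search_request("http://localhost:8000", "sk_live_abc123",
                           "What are the main findings?", "my_docs")
# To send against a live server: urllib.request.urlopen(req)
```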

📦 Installation

# Core package (HTML, CSV, LaTeX, WebVTT, plain-text parsing included — zero extra deps)
pip install flamehaven-filesearch

# + Document parsers: PDF (pymupdf/pypdf), DOCX, XLSX, PPTX, RTF
pip install flamehaven-filesearch[parsers]

# + Image OCR (Pillow + pytesseract; requires Tesseract system binary)
pip install flamehaven-filesearch[vision]

# + Google Gemini API
pip install flamehaven-filesearch[google]

# + REST API server (FastAPI + uvicorn)
pip install flamehaven-filesearch[api]

# + HNSW vector index
pip install flamehaven-filesearch[vector]

# + PostgreSQL backend
pip install flamehaven-filesearch[postgres]

# Everything
pip install flamehaven-filesearch[all]

# Build from source
git clone https://github.com/flamehaven01/Flamehaven-Filesearch.git
cd Flamehaven-Filesearch
docker build -t flamehaven-filesearch:1.5.2 .

Framework Integrations

Framework SDKs (LangChain, LlamaIndex, etc.) are imported lazily — install only what you need:

# LangChain  (pip install langchain-core)
from flamehaven_filesearch.integrations import FlamehavenLangChainLoader
docs = FlamehavenLangChainLoader("report.pdf", chunk=True).load()

# LlamaIndex  (pip install llama-index-core)
from flamehaven_filesearch.integrations import FlamehavenLlamaIndexReader
nodes = FlamehavenLlamaIndexReader(chunk=True).load_data(["report.pdf", "slides.pptx"])

# Haystack  (pip install haystack-ai)
from flamehaven_filesearch.integrations import FlamehavenHaystackConverter
result = FlamehavenHaystackConverter().run(sources=["report.pdf"])

# CrewAI  (pip install crewai)
from flamehaven_filesearch.integrations import FlamehavenCrewAITool
tool = FlamehavenCrewAITool()           # pass to your agent's tools list

Configuration ⚙️

Required Environment Variables

export GEMINI_API_KEY="your_google_gemini_api_key"
export FLAMEHAVEN_ADMIN_KEY="your_secure_admin_password"

Optional Configuration

export HOST="0.0.0.0"              # Bind address
export PORT="8000"                  # Server port
export REDIS_HOST="localhost"       # Distributed caching
export REDIS_PORT="6379"            # Redis port

Advanced Configuration

Create a config.yaml for fine-tuned control:

vector_store:
  quantization: int8
  compression: gravitas_pack
  
search:
  default_mode: hybrid
  typo_correction: true
  max_results: 10
  
security:
  rate_limit: 100  # requests per minute
  max_file_size: 52428800  # 50MB
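After loading config.yaml (for example with `yaml.safe_load`), it is worth sanity-checking the values before startup. A minimal sketch; the allowed modes match the documented search modes, while the numeric bounds here are illustrative assumptions, not FLAMEHAVEN's own validation:

```python
def validate_config(cfg):
    """Sanity-check the config.yaml fields shown above.

    `cfg` is the dict produced by yaml.safe_load("config.yaml" contents).
    Bounds are illustrative assumptions.
    """
    errors = []
    mode = cfg.get("search", {}).get("default_mode")
    if mode not in {"keyword", "semantic", "hybrid"}:
        errors.append(f"unknown search mode: {mode!r}")
    rate = cfg.get("security", {}).get("rate_limit", 0)
    if not 1 <= rate <= 10_000:
        errors.append(f"rate_limit out of range: {rate}")
    size = cfg.get("security", {}).get("max_file_size", 0)
    if size > 500 * 1024 * 1024:  # cap uploads at 500 MB in this sketch
        errors.append(f"max_file_size too large: {size}")
    return errors

cfg = {
    "search": {"default_mode": "hybrid"},
    "security": {"rate_limit": 100, "max_file_size": 52428800},  # 50 MB
}
assert validate_config(cfg) == []
```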

📊 Performance

| Metric | Value | Notes |
| --- | --- | --- |
| Vector Generation | <1ms | DSP v2.0, zero ML dependencies |
| Memory Footprint | 75% reduced | Int8 quantization vs float32 |
| Metadata Size | 90% smaller | Gravitas-Pack compression |
| Test Suite | 443 tests | All passing (pytest) |
| Cold Start | 3 seconds | Docker container ready |
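The 75% memory figure follows directly from element width: int8 stores one byte per dimension versus four for float32. A quick check (the 768-dim vector is an arbitrary example; the actual DSP v2.0 embedding width may differ):

```python
def vector_bytes(dims, dtype_bytes):
    """Memory for one embedding vector, ignoring per-object overhead."""
    return dims * dtype_bytes

dims = 768                    # example width only
f32 = vector_bytes(dims, 4)   # float32: 4 bytes/dimension -> 3072 bytes
i8 = vector_bytes(dims, 1)    # int8: 1 byte/dimension -> 768 bytes
saving = 1 - i8 / f32         # 0.75, i.e. the 75% reduction above
```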

Real-World Benchmarks

Environment: Docker on Apple M1 Mac, 16GB RAM
Document Set: 500 PDFs, ~2GB total

Health Check:           8ms
Search (cache hit):     9ms
Search (cache miss):    1,250ms  (includes Gemini API call)
Batch Search (10):      2,500ms  (parallel processing)
Upload (50MB file):     3,200ms  (with indexing)

Architecture 🏗️

flowchart TD
    Client(["Client\n(HTTP / SDK)"])

    subgraph API["REST API Layer (FastAPI)"]
        Upload["/api/upload"]
        Search["/api/search"]
        Admin["/api/admin"]
    end

    subgraph Engine["Engine Layer"]
        FP["FileParser\n+ BackendRegistry\n(34 formats)"]
        Cache["ParseCache\n(mtime-based)"]
        Chunker["TextChunker\n+ ContextExtractor"]
        DSP["DSP v2.0\nEmbedding Generator\n(&lt;1ms, zero-ML)"]
        Scorer["SemanticScorer\n+ TypoCorrector"]
    end

    subgraph Storage["Storage Layer"]
        SQLite[("SQLite\nMetadata Store")]
        Vec[("Vector Store\n(local / pgvector)")]
        Redis[("Redis Cache\n(optional)")]
    end

    Gemini["Google Gemini API\n(reasoning)"]
    Metrics["Metrics Logger"]

    Client --> Upload & Search & Admin
    Upload --> FP
    FP <-->|"cache hit/miss"| Cache
    FP --> Chunker
    Chunker --> DSP
    DSP --> Vec
    FP --> SQLite

    Search --> Scorer
    Scorer --> DSP
    DSP --> Vec
    Scorer --> Gemini
    Gemini --> Client

    Admin --> Metrics
    Admin --> SQLite
    Storage <-->|"read / write"| Redis

Full layer detail: Architecture.md


Security 🔒

FLAMEHAVEN takes security seriously:

  • API Key Hashing - SHA256 with salt
  • Rate Limiting - Per-key quotas (default: 100/min)
  • Permission System - Granular access control
  • Audit Logging - Complete request history
  • OWASP Headers - Security headers enabled by default
  • Input Validation - Strict file type and size checks
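The salted-SHA256 scheme in the first bullet can be sketched with the standard library. This mirrors the idea, not FLAMEHAVEN's exact implementation; for low-entropy secrets, `hashlib.pbkdf2_hmac` or scrypt would be stronger choices:

```python
import hashlib
import hmac
import secrets

def hash_api_key(api_key, salt=None):
    """Return (salt, digest); only these are persisted, never the raw key."""
    salt = salt or secrets.token_hex(16)
    digest = hashlib.sha256((salt + api_key).encode()).hexdigest()
    return salt, digest

def verify_api_key(api_key, salt, stored_digest):
    """Constant-time comparison avoids timing side channels."""
    _, digest = hash_api_key(api_key, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_api_key("sk_live_abc123")
```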

Security Best Practices

# Use strong admin keys
export FLAMEHAVEN_ADMIN_KEY=$(openssl rand -base64 32)

# Enable HTTPS in production
# (use nginx/traefik as reverse proxy)

# Rotate API keys regularly
curl -X DELETE http://localhost:8000/api/admin/keys/old_key_id \
  -H "X-Admin-Key: $FLAMEHAVEN_ADMIN_KEY"

Roadmap 🗺️

Full roadmap: ROADMAP.md

v1.4.x (Completed)

  • Multimodal search (image + text)
  • HNSW vector indexing for faster search
  • OAuth2/OIDC integration
  • PostgreSQL backend (metadata + pgvector)
  • Usage-budget controls and reporting
  • pgvector tuning and reliability hardening
  • CI/CD — ruff replaces flake8; pipelines fully green

v1.5.x (Completed)

  • Universal Document Parser — 34 formats, zero doc-AI dependency (v1.5.0)
  • Internal text chunker — structure-aware + token-aware, zero ML deps (v1.5.0)
  • Framework integrations — LangChain, LlamaIndex, Haystack, CrewAI (v1.5.0)
  • Backend Plugin Architecture — AbstractFormatBackend + BackendRegistry (v1.5.2)
  • Parse cache — mtime-based, extract_text(use_cache=True) (v1.5.2)
  • ContextExtractor — sliding-window RAG chunk enrichment (v1.5.2)
  • 443 tests; AI-Slop-Detector critical deficits: 0 (v1.5.2)

v2.0.0 (Q3 2026)

  • Multi-language support (15+ languages) — multilingual stopwords + jieba
  • Kubernetes Helm charts
  • Distributed indexing

Troubleshooting 🐛

❌ 401 Unauthorized Error

Problem: API returns 401 when making requests.

Solutions:

  1. Verify FLAMEHAVEN_ADMIN_KEY environment variable is set
  2. Check Authorization: Bearer sk_live_... header format
  3. Ensure API key hasn't expired (check admin dashboard)

# Debug: Check if admin key is set
echo $FLAMEHAVEN_ADMIN_KEY

# Regenerate API key
curl -X POST http://localhost:8000/api/admin/keys \
  -H "X-Admin-Key: $FLAMEHAVEN_ADMIN_KEY" \
  -d '{"name":"debug","permissions":["search"]}'

🐌 Slow Search Performance

Problem: Searches taking >5 seconds.

Solutions:

  1. Check the cache hit rate via the metrics endpoint: curl http://localhost:8000/metrics (requires FLAMEHAVEN_METRICS_ENABLED=1 on the server)
  2. Enable Redis for distributed caching
  3. Verify Gemini API latency (should be <1.5s)

# Enable Redis caching
docker run -d --name redis redis:7-alpine
export REDIS_HOST=localhost

💾 High Memory Usage

Problem: Container using >2GB RAM.

Solutions:

  1. Enable Redis with LRU eviction policy
  2. Reduce max file size in config
  3. Monitor with Prometheus endpoint

# Configure Redis memory limit
docker run -d \
  -p 6379:6379 \
  redis:7-alpine \
  --maxmemory 512mb \
  --maxmemory-policy allkeys-lru

More solutions in our Wiki Troubleshooting Guide.


Documentation 📚

Documentation Hub

Use the links below to jump to the most relevant guide.

| Topic | Description |
| --- | --- |
| Document Parsing | Supported formats, internal parsers, RAG chunking |
| Framework Integrations | LangChain, LlamaIndex, Haystack, CrewAI adapters |
| API Reference | REST endpoints, payloads, rate limits |
| Architecture | How all layers fit together (v1.5.2) |
| Configuration Reference | Full list of environment variables and config fields |
| Production Deployment | Docker, systemd, reverse proxy, scaling tips |
| Troubleshooting | Step-by-step debugging playbook |
| Benchmarks | Performance measurements and methodology |
These Markdown files live inside the repository so they stay versioned alongside the code. Feel free to contribute improvements via pull requests.


Contributing 🤝

We love contributions! FLAMEHAVEN is better because of developers like you.

Good First Issues

  • 🟢 [Easy] Add dark mode to admin dashboard (1-2 hours)
  • 🟡 [Medium] PostgreSQL backend for usage tracker (multi-instance deployments)
  • 🔴 [Advanced] Kubernetes Helm charts for production deployment

See CONTRIBUTING.md for development setup and guidelines.



License 📄

Distributed under the MIT License. See LICENSE for more information.


🙏 Acknowledgments

Built with amazing open source tools:

  • FastAPI - Modern Python web framework
  • Google Gemini - Semantic understanding and reasoning
  • SQLite - Lightweight, embedded database
  • Redis - In-memory caching (optional)

⭐ Star us on GitHub · 📖 Read the Docs · 🚀 Deploy Now

Built with 🔥 by the Flamehaven Core Team

Last updated: April 19, 2026 • Version 1.5.3
