
gptme-rag

RAG (Retrieval-Augmented Generation) implementation for gptme context management.


Features

  • 📚 Document indexing with ChromaDB
    • Fast and efficient vector storage
    • Semantic search capabilities
    • Persistent storage
  • 🔍 Semantic search with embeddings
    • Relevance scoring
    • Token-aware context assembly
    • Clean output formatting
  • 📄 Smart document processing
    • Streaming large file handling
    • Automatic document chunking
    • Configurable chunk size/overlap
    • Document reconstruction
  • 👀 File watching and auto-indexing
    • Real-time index updates
    • Pattern-based file filtering
    • Efficient batch processing
    • Automatic persistence
  • 🛠️ CLI interface for testing and development
    • Index management
    • Search functionality
    • Context assembly
    • File watching

Installation

# Using pip
pip install gptme-rag

# Using pipx (recommended for CLI tools)
pipx install gptme-rag

# From source (for development)
git clone https://github.com/ErikBjare/gptme-rag.git
cd gptme-rag
poetry install

After installation, the gptme-rag command will be available in your terminal.

Usage

Indexing Documents

# Index markdown files in a directory
poetry run python -m gptme_rag index /path/to/documents --pattern "**/*.md"

# Index with custom persist directory
poetry run python -m gptme_rag index /path/to/documents --persist-dir ./index

Searching

# Basic search
poetry run python -m gptme_rag search "your query here"

# Advanced search with options
poetry run python -m gptme_rag search "your query" \
  --n-results 5 \
  --persist-dir ./index \
  --max-tokens 4000 \
  --show-context
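Under the hood, semantic search ranks documents by the similarity of their embedding vectors to the query's embedding. gptme-rag delegates embedding and retrieval to ChromaDB; the following is only a conceptual sketch of cosine-similarity ranking with toy vectors, not gptme-rag's actual code.

```python
# Conceptual illustration of relevance scoring: documents and queries are
# embedded as vectors, and relevance is their cosine similarity.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank(query_vec: list[float], docs: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Return (document, score) pairs sorted by descending relevance."""
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in docs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy 3-dimensional "embeddings" (real embedding models use hundreds of dims):
docs = {
    "docs/indexing.md": [0.9, 0.1, 0.0],
    "docs/search.md": [0.2, 0.9, 0.1],
}
results = rank([1.0, 0.0, 0.0], docs)
```

The relevance numbers shown in search output are similarity scores of this kind: higher means the document's embedding points in nearly the same direction as the query's.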

File Watching

The watch command monitors directories for changes and automatically updates the index:

# Watch a directory with default settings
poetry run python -m gptme_rag watch /path/to/documents

# Watch with custom pattern and ignore rules
poetry run python -m gptme_rag watch /path/to/documents \
  --pattern "**/*.{md,py}" \
  --ignore-patterns "*.tmp" "*.log" \
  --persist-dir ./index

Features:

  • 🔄 Real-time index updates
  • 🎯 Pattern matching for file types
  • 🚫 Configurable ignore patterns
  • 🔋 Efficient batch processing
  • 💾 Automatic persistence

The watcher will:

  • Perform initial indexing of existing files
  • Monitor for file changes (create/modify/delete/move)
  • Update the index automatically
  • Handle rapid changes efficiently with debouncing
  • Continue running until interrupted (Ctrl+C)
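The debouncing idea can be pictured as follows: each change event resets a per-file timer, and a file is only re-indexed once it has been quiet for the debounce interval, so a burst of rapid saves triggers a single update. This is a hypothetical sketch of the pattern, not gptme-rag's watcher implementation.

```python
# Minimal debouncer: coalesce rapid change events per path, releasing a path
# only after it has been quiet for `delay` seconds.
import time

class Debouncer:
    def __init__(self, delay: float = 0.5):
        self.delay = delay
        self.pending: dict[str, float] = {}  # path -> time of last event

    def record(self, path: str) -> None:
        """Record a change event; resets the quiet-period timer for the path."""
        self.pending[path] = time.monotonic()

    def ready(self) -> list[str]:
        """Return paths quiet for at least `delay` seconds and drop them."""
        now = time.monotonic()
        done = [p for p, t in self.pending.items() if now - t >= self.delay]
        for p in done:
            del self.pending[p]
        return done

d = Debouncer(delay=0.05)
d.record("notes.md")
d.record("notes.md")   # rapid second event: timer resets, still one entry
time.sleep(0.06)
to_index = d.ready()   # notes.md has now been quiet long enough
```

A real watcher would call `ready()` from its event loop and pass the returned paths to the indexer in one batch.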

Performance Benchmarking

The benchmark commands help measure and optimize performance:

# Benchmark document indexing
poetry run python -m gptme_rag benchmark indexing /path/to/documents \
  --pattern "**/*.md" \
  --persist-dir ./benchmark_index

# Benchmark search performance
poetry run python -m gptme_rag benchmark search /path/to/documents \
  --queries "python" "documentation" "example" \
  --n-results 10

# Benchmark file watching
poetry run python -m gptme_rag benchmark watch-perf /path/to/documents \
  --duration 10 \
  --updates-per-second 5

Features:

  • 📊 Comprehensive metrics
    • Operation duration
    • Memory usage
    • Throughput
    • Custom metrics per operation
  • 🔬 Multiple benchmark types
    • Document indexing
    • Search operations
    • File watching
  • 📈 Performance tracking
    • Memory efficiency
    • Processing speed
    • System resource usage

Example benchmark output:

โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ Operation      โ”ƒ Duration(s) โ”ƒ Memory(MB) โ”ƒ Throughput โ”ƒ Additional Metrics โ”ƒ
โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ indexing       โ”‚      0.523 โ”‚     15.42 โ”‚   19.12/s โ”‚ files: 10         โ”‚
โ”‚ search         โ”‚      0.128 โ”‚      5.67 โ”‚   23.44/s โ”‚ queries: 3        โ”‚
โ”‚ file_watching  โ”‚      5.012 โ”‚      8.91 โ”‚    4.99/s โ”‚ updates: 25       โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
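Metrics like these come from a simple recipe: time the operation, count the items processed, and derive throughput as items per second. As a standalone illustration (not the benchmark module's actual code), the core measurement might look like:

```python
# Time an operation over a batch of items and derive throughput.
import time

def benchmark(operation, items):
    """Run `operation` over `items`; return duration, throughput, item count."""
    start = time.perf_counter()
    for item in items:
        operation(item)
    duration = time.perf_counter() - start
    throughput = len(items) / duration if duration > 0 else float("inf")
    return {"duration_s": duration, "throughput": throughput, "items": len(items)}

# Example: "index" three toy documents by lowercasing them.
stats = benchmark(lambda doc: doc.lower(), ["Doc one", "Doc two", "Doc three"])
```

Memory figures would additionally require sampling process RSS (e.g. via `psutil`) before and after the run.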

Document Chunking

The indexer supports automatic document chunking for efficient processing of large files:

# Index with custom chunk settings
poetry run python -m gptme_rag index /path/to/documents \
  --chunk-size 1000 \
  --chunk-overlap 200

# Search with chunk grouping
poetry run python -m gptme_rag search "your query" \
  --group-chunks \
  --n-results 5

Features:

  • 🔄 Streaming processing
    • Handles large files efficiently
    • Minimal memory usage
    • Progress reporting
  • 📑 Smart chunking
    • Configurable chunk size
    • Overlapping chunks for context
    • Token-aware splitting
  • 🔍 Enhanced search
    • Chunk-aware relevance
    • Result grouping by document
    • Full document reconstruction
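"Result grouping by document" means that search hits, which arrive as individual chunks with ids like `guide.md#chunk5`, are collected back under their source file. The id format mirrors the example output in this README; the grouping code itself is an illustrative sketch, not gptme-rag's implementation.

```python
# Group chunk ids of the form '<doc>#chunk<N>' under their source document.
from collections import defaultdict

def group_by_document(chunk_ids: list[str]) -> dict[str, list[str]]:
    """Map each source document to its matching chunks, preserving hit order."""
    groups: dict[str, list[str]] = defaultdict(list)
    for chunk_id in chunk_ids:
        doc, _, _ = chunk_id.partition("#")
        groups[doc].append(chunk_id)
    return dict(groups)

groups = group_by_document(["guide.md#chunk5", "guide.md#chunk2", "README.md#chunk1"])
```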

Example Output:

Most Relevant Documents:

1. documentation.md#chunk2 (relevance: 0.85)
  Detailed section about configuration options, including chunk size and overlap settings.
  [Part of: documentation.md]

2. guide.md#chunk5 (relevance: 0.78)
  Example usage showing how to process large documents efficiently.
  [Part of: guide.md]

3. README.md#chunk1 (relevance: 0.72)
  Overview of the chunking system and its benefits for large document processing.
  [Part of: README.md]

Full Context:
Total tokens: 850
Documents included: 3 (from 3 source documents)
Truncated: False

The chunking system automatically:

  • Splits large documents into manageable pieces
  • Maintains context across chunk boundaries
  • Groups related chunks in search results
  • Provides document reconstruction when needed
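The size/overlap mechanics can be sketched in a few lines. gptme-rag's real splitter is token-aware; this simplified character-based version only illustrates how each chunk repeats the tail of the previous one so context survives chunk boundaries.

```python
# Simplified overlapping chunker: fixed-size chunks, each sharing `overlap`
# characters with its predecessor (a real splitter counts tokens, not chars).
def chunk_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split `text` into chunks of `chunk_size` chars, `overlap` chars shared."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_text("abcdefghij", chunk_size=4, overlap=2)
# Adjacent chunks share their last/first two characters.
```

With `--chunk-size 1000 --chunk-overlap 200`, the same idea applies at document scale: consecutive chunks share a 200-unit window so a sentence falling on a boundary still appears intact in at least one chunk.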

Development

Running Tests

# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=gptme_rag

Project Structure

gptme_rag/
├── __init__.py
├── cli.py                    # CLI interface
├── indexing/                 # Document indexing
│   ├── document.py           # Document model
│   └── indexer.py            # ChromaDB integration
├── query/                    # Search functionality
│   └── context_assembler.py  # Context assembly
└── utils/                    # Utility functions

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Run tests and linting
  6. Submit a pull request

Integration with gptme

This package is designed to integrate with gptme as a plugin, providing:

  • Automatic context enhancement
  • Semantic search across project files
  • Knowledge base integration
  • Smart context assembly

License

MIT License. See LICENSE for details.
