# gptme-rag

RAG (Retrieval-Augmented Generation) implementation for gptme context management.
## Features
- Document indexing with ChromaDB
  - Fast and efficient vector storage
  - Semantic search capabilities
  - Persistent storage
- Semantic search with embeddings
  - Relevance scoring
  - Token-aware context assembly
  - Clean output formatting
- Smart document processing
  - Streaming large-file handling
  - Automatic document chunking
  - Configurable chunk size/overlap
  - Document reconstruction
- File watching and auto-indexing
  - Real-time index updates
  - Pattern-based file filtering
  - Efficient batch processing
  - Automatic persistence
- CLI interface for testing and development
  - Index management
  - Search functionality
  - Context assembly
  - File watching
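To make the semantic-search feature concrete: indexing maps each document to a vector, and search ranks documents by similarity to the query vector. gptme-rag delegates this to ChromaDB and real embedding models; the toy bag-of-words vectors below are only a sketch of the ranking mechanics, and the document names are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "index": document id -> vector (hypothetical documents).
index = {doc_id: embed(text) for doc_id, text in {
    "install.md": "pip install gptme rag then run the cli",
    "watch.md": "the watch command monitors directories for file changes",
}.items()}

def search(query: str, n_results: int = 1) -> list[str]:
    # Rank all indexed documents by similarity to the query.
    q = embed(query)
    ranked = sorted(index.items(), key=lambda kv: cosine(q, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:n_results]]

print(search("monitor a directory for changes"))  # -> ['watch.md']
```

A real vector store adds persistence and approximate nearest-neighbor search on top of this idea, which is what makes it fast at scale.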
## Installation
```sh
# Using pip
pip install gptme-rag

# Using pipx (recommended for CLI tools)
pipx install gptme-rag

# From source (for development)
git clone https://github.com/ErikBjare/gptme-rag.git
cd gptme-rag
poetry install
```
After installation, the `gptme-rag` command will be available in your terminal.
## Usage

### Indexing Documents
```sh
# Index markdown files in a directory
poetry run python -m gptme_rag index /path/to/documents --pattern "**/*.md"

# Index with custom persist directory
poetry run python -m gptme_rag index /path/to/documents --persist-dir ./index
```
### Searching
```sh
# Basic search
poetry run python -m gptme_rag search "your query here"

# Advanced search with options
poetry run python -m gptme_rag search "your query" \
  --n-results 5 \
  --persist-dir ./index \
  --max-tokens 4000 \
  --show-context
```
### File Watching
The watch command monitors directories for changes and automatically updates the index:
```sh
# Watch a directory with default settings
poetry run python -m gptme_rag watch /path/to/documents

# Watch with custom pattern and ignore rules
poetry run python -m gptme_rag watch /path/to/documents \
  --pattern "**/*.{md,py}" \
  --ignore-patterns "*.tmp" "*.log" \
  --persist-dir ./index
```
Features:
- Real-time index updates
- Pattern matching for file types
- Configurable ignore patterns
- Efficient batch processing
- Automatic persistence
The watcher will:
- Perform initial indexing of existing files
- Monitor for file changes (create/modify/delete/move)
- Update the index automatically
- Handle rapid changes efficiently with debouncing
- Continue running until interrupted (Ctrl+C)
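The debouncing mentioned above can be sketched in isolation: collect change events and only release them as one batch once the event stream has been quiet for a short window. The class and timings below are illustrative, not gptme-rag's actual implementation.

```python
import time

class DebouncedBatcher:
    """Collects change events and releases them as one batch after a quiet period."""

    def __init__(self, quiet_seconds: float = 0.1):
        self.quiet_seconds = quiet_seconds
        self.pending: list[str] = []
        self.last_event = 0.0

    def on_event(self, path: str) -> None:
        # Deduplicate rapid repeated changes to the same file.
        if path not in self.pending:
            self.pending.append(path)
        self.last_event = time.monotonic()

    def flush_if_quiet(self) -> list[str]:
        # Hand the batch over only once no event has arrived for quiet_seconds.
        if self.pending and time.monotonic() - self.last_event >= self.quiet_seconds:
            batch, self.pending = self.pending, []
            return batch
        return []

batcher = DebouncedBatcher(quiet_seconds=0.05)
batcher.on_event("notes.md")
batcher.on_event("notes.md")     # rapid duplicate, merged into one pending update
batcher.on_event("todo.md")
time.sleep(0.06)                 # quiet period elapses
print(batcher.flush_if_quiet())  # -> ['notes.md', 'todo.md']
```

Batching this way means a burst of saves to the same file triggers one re-index instead of many.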
### Performance Benchmarking
The benchmark commands help measure and optimize performance:
```sh
# Benchmark document indexing
poetry run python -m gptme_rag benchmark indexing /path/to/documents \
  --pattern "**/*.md" \
  --persist-dir ./benchmark_index

# Benchmark search performance
poetry run python -m gptme_rag benchmark search /path/to/documents \
  --queries "python" "documentation" "example" \
  --n-results 10

# Benchmark file watching
poetry run python -m gptme_rag benchmark watch-perf /path/to/documents \
  --duration 10 \
  --updates-per-second 5
```
Features:
- Comprehensive metrics
  - Operation duration
  - Memory usage
  - Throughput
  - Custom metrics per operation
- Multiple benchmark types
  - Document indexing
  - Search operations
  - File watching
- Performance tracking
  - Memory efficiency
  - Processing speed
  - System resource usage
Example benchmark output:
| Operation     | Duration (s) | Memory (MB) | Throughput | Additional Metrics |
|---------------|--------------|-------------|------------|--------------------|
| indexing      | 0.523        | 15.42       | 19.12/s    | files: 10          |
| search        | 0.128        | 5.67        | 23.44/s    | queries: 3         |
| file_watching | 5.012        | 8.91        | 4.99/s     | updates: 25        |
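A minimal version of the duration/memory/throughput measurement shown above can be built with the standard library (`time` plus `tracemalloc`). This is a sketch, not gptme-rag's benchmark harness, and the workload below is a made-up stand-in for indexing.

```python
import time
import tracemalloc

def benchmark(operation, n_items: int) -> dict:
    """Run `operation` once; report duration, peak memory, and throughput."""
    tracemalloc.start()
    start = time.perf_counter()
    operation()
    duration = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # peak allocated bytes during the run
    tracemalloc.stop()
    return {
        "duration_s": duration,
        "memory_mb": peak / 1024 / 1024,
        "throughput": n_items / duration if duration > 0 else float("inf"),
    }

# Illustrative workload standing in for "index 10 files".
stats = benchmark(lambda: [sorted(range(10_000)) for _ in range(10)], n_items=10)
print(f"{stats['duration_s']:.3f}s, {stats['memory_mb']:.2f}MB, {stats['throughput']:.2f}/s")
```

Tracking peak memory rather than final memory matters here: streaming code can have a low resident footprint while still spiking during processing.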
### Document Chunking
The indexer supports automatic document chunking for efficient processing of large files:
```sh
# Index with custom chunk settings
poetry run python -m gptme_rag index /path/to/documents \
  --chunk-size 1000 \
  --chunk-overlap 200

# Search with chunk grouping
poetry run python -m gptme_rag search "your query" \
  --group-chunks \
  --n-results 5
```
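The chunk-size/overlap mechanics can be sketched as a sliding window: each chunk shares its last few units with the start of the next, so context survives the boundary. The word-based splitting below is a simplification (gptme-rag's chunker is token-aware), and the tiny sizes are for illustration only.

```python
def chunk_text(text: str, chunk_size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into windows of `chunk_size` words, each sharing `overlap`
    words with the previous chunk."""
    words = text.split()
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already reached the end of the text
    return chunks

text = "one two three four five six seven eight nine"
for i, chunk in enumerate(chunk_text(text), 1):
    print(f"chunk{i}: {chunk}")
# chunk1: one two three four five
# chunk2: four five six seven eight
# chunk3: seven eight nine
```

The overlap is what lets a sentence that straddles a boundary still appear intact in at least one chunk.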
Features:
- Streaming processing
  - Handles large files efficiently
  - Minimal memory usage
  - Progress reporting
- Smart chunking
  - Configurable chunk size
  - Overlapping chunks for context
  - Token-aware splitting
- Enhanced search
  - Chunk-aware relevance
  - Result grouping by document
  - Full document reconstruction
Example output:

```
Most Relevant Documents:

1. documentation.md#chunk2 (relevance: 0.85)
   Detailed section about configuration options, including chunk size and overlap settings.
   [Part of: documentation.md]

2. guide.md#chunk5 (relevance: 0.78)
   Example usage showing how to process large documents efficiently.
   [Part of: guide.md]

3. README.md#chunk1 (relevance: 0.72)
   Overview of the chunking system and its benefits for large document processing.
   [Part of: README.md]

Full Context:
Total tokens: 850
Documents included: 3 (from 3 source documents)
Truncated: False
```
The chunking system automatically:
- Splits large documents into manageable pieces
- Maintains context across chunk boundaries
- Groups related chunks in search results
- Provides document reconstruction when needed
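The chunk-grouping step can be sketched as collapsing per-chunk hits like `guide.md#chunk5` into one entry per source document, ranked by each document's best-scoring chunk. The hit format and scores below are made up for illustration.

```python
def group_chunks(hits: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Collapse chunk-level hits into one entry per source document,
    keeping each document's best relevance score."""
    best: dict[str, float] = {}
    for chunk_id, score in hits:
        doc = chunk_id.split("#")[0]  # 'guide.md#chunk5' -> 'guide.md'
        best[doc] = max(best.get(doc, 0.0), score)
    # Rank documents by their strongest chunk.
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)

hits = [
    ("documentation.md#chunk2", 0.85),
    ("guide.md#chunk5", 0.78),
    ("documentation.md#chunk7", 0.61),
]
print(group_chunks(hits))  # -> [('documentation.md', 0.85), ('guide.md', 0.78)]
```

Without this step, one long document with several mediocre chunks could crowd out more relevant documents in the result list.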
## Development

### Running Tests
```sh
# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=gptme_rag
```
### Project Structure
```
gptme_rag/
├── __init__.py
├── cli.py                    # CLI interface
├── indexing/                 # Document indexing
│   ├── document.py           # Document model
│   └── indexer.py            # ChromaDB integration
├── query/                    # Search functionality
│   └── context_assembler.py  # Context assembly
└── utils/                    # Utility functions
```
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run tests and linting
6. Submit a pull request
## Releases

Releases are automated through GitHub Actions. The process is:

1. Update the version in `pyproject.toml`
2. Commit the change: `git commit -am "chore: bump version to x.y.z"`
3. Create and push a tag: `git tag vx.y.z && git push origin master vx.y.z`
4. Create a GitHub release (can be done with `gh release create vx.y.z`)
5. The publish workflow will then automatically:
   - Run tests
   - Build the package
   - Publish to PyPI
## Integration with gptme
This package is designed to integrate with gptme as a plugin, providing:
- Automatic context enhancement
- Semantic search across project files
- Knowledge base integration
- Smart context assembly
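Smart context assembly can be sketched as filling a token budget in relevance order and reporting whether anything had to be left out. Counting words instead of real tokens is a simplification, and the budget and documents below are arbitrary examples.

```python
def assemble_context(docs: list[str], max_tokens: int) -> tuple[str, bool]:
    """Concatenate documents (assumed pre-sorted by relevance) until the token
    budget is hit; return the context and whether anything was truncated."""
    picked: list[str] = []
    used = 0
    for doc in docs:
        cost = len(doc.split())  # stand-in for a real tokenizer
        if used + cost > max_tokens:
            return "\n\n".join(picked), True  # budget exhausted: truncated
        picked.append(doc)
        used += cost
    return "\n\n".join(picked), False

docs = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
context, truncated = assemble_context(docs, max_tokens=6)
print(truncated)  # -> True (the third document would exceed the budget)
```

Processing in relevance order means that when the budget runs out, it is the least relevant material that gets dropped.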
## License
MIT License. See LICENSE for details.