Offline codebase wisdom and seek tool powered by local LLMs
Yoda CLI
A powerful offline CLI tool for codebase analysis and intelligent chat powered by local LLMs.
Features
Yoda CLI enables you to:
- Generate Wisdom: Automatically create comprehensive WISDOM.md documentation with architecture diagrams, dependencies, and project insights
- Chat with your code: Ask questions about your codebase using RAG (Retrieval-Augmented Generation) for context-aware answers
- Work completely offline: Uses Ollama for local LLM inference, keeping your code private and secure
- Semantic search: Builds vector indexes using Faiss and LlamaIndex for fast, relevant code retrieval
- Real-time streaming: Get answers with syntax-highlighted streaming output
- Model switching: Choose between different code models (CodeLlama, StarCoder, DeepSeek Coder)
Installation
Quick Install
- Install Python 3.9+ (if not already installed):
macOS:
brew install python@3.11
Linux:
sudo apt-get install python3.11 python3.11-venv # Ubuntu/Debian
# or
sudo yum install python3.11 # RedHat/CentOS
- Install Yoda CLI:
# Clone or navigate to the yoda directory
cd yoda
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install Yoda
pip install -e .
That's it! Yoda will automatically:
- Install Ollama if not present (macOS/Linux)
- Start the Ollama service when needed
- Download required models on first use
First Run
When you run any Yoda command for the first time, it will automatically:
- Check if Ollama is installed (install if missing on macOS/Linux)
- Start the Ollama service if not running
- Download the required model (codellama:7b by default)
Note for Windows users: Please install Ollama manually from https://ollama.com before using Yoda.
Usage
Initialize a codebase
Index your codebase for wisdom generation and chat:
# Initialize current directory
yoda init
# Or specify a path and model
yoda init /path/to/project --model codellama:7b
This will:
- Parse source files using tree-sitter
- Build a semantic vector index with Faiss
- Download and configure the LLM model
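The steps above can be illustrated with a toy version of the pipeline. The sketch below uses a hashing "embedding" and plain NumPy in place of a real embedding model and Faiss, purely to show the idea of chunk-level vector search; all names are illustrative, not Yoda's actual code.

```python
# Toy sketch of the indexing idea behind `yoda init`: split code into chunks,
# embed each chunk as a vector, and answer queries by similarity search.
import zlib
import numpy as np

def embed(text: str, dim: int = 1024) -> np.ndarray:
    """Deterministic toy embedding: hash tokens into a fixed-size vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class ToyIndex:
    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def search(self, query: str, k: int = 2) -> list[str]:
        scores = np.stack(self.vectors) @ embed(query)  # cosine similarity
        top = np.argsort(scores)[::-1][:k]
        return [self.chunks[i] for i in top]

index = ToyIndex()
index.add("def parse_file(path): ...  # tree-sitter parsing")
index.add("def build_index(docs): ...  # faiss vector index")
index.add("def render_wisdom(ctx): ...  # WISDOM.md template")
print(index.search("how is the faiss index built", k=1)[0])
```

A real index replaces the hashing trick with learned embeddings and stores the vectors in Faiss so retrieval stays fast on large codebases.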
Generate WISDOM documentation
Create comprehensive project documentation with architecture diagrams:
yoda wisdom
This generates a WISDOM.md file with:
- Detailed project overview (3-4 paragraphs)
- Technology stack analysis with percentages
- Architecture description with Mermaid diagrams
- Dependency analysis (core & development)
- Annotated directory structure
- Getting started guide
- Core modules breakdown
- Data flow visualization
- Key features
- Configuration & environment setup
- Testing architecture
- Performance considerations
- Security measures
- Deployment architecture
Chat with your codebase (Yoda Seek)
Ask questions about your code with real-time streaming:
yoda seek
Interactive commands:
> What does the SeekEngine class do?
> How is the RAG system implemented?
> Explain the indexing process
> clear # Clear conversation history
> help # Show help
> exit # Quit (or Ctrl+C twice)
Features:
- Real-time syntax-highlighted responses
- WISDOM.md integration for better context
- Quirky Yoda-themed messages
- Ctrl+C interruption support
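A typical RAG chat turn folds the retrieved chunks (plus WISDOM.md, when present) into the prompt sent to the local model. The sketch below shows that shape: the `/api/generate` endpoint and its streaming JSON format are Ollama's real API, but the prompt layout and function names are assumptions, not Yoda's actual code.

```python
# Sketch of a RAG chat turn: assemble context into a prompt, then stream
# tokens from a local Ollama server. Prompt format is illustrative.
import json
import urllib.request

def build_prompt(question: str, chunks: list[str], wisdom: str = "") -> str:
    parts = []
    if wisdom:
        parts.append(f"Project overview:\n{wisdom}")
    parts.append("Relevant code:\n" + "\n---\n".join(chunks))
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def stream_answer(prompt: str, model: str = "codellama:7b"):
    """Yield response tokens from a local Ollama server (needs `ollama serve`)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": True}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # Ollama streams newline-delimited JSON objects
            yield json.loads(line).get("response", "")

prompt = build_prompt("What does the SeekEngine class do?",
                      ["class SeekEngine:\n    ..."])
print(prompt)
```

Streaming the newline-delimited JSON objects one at a time is what makes the syntax-highlighted output appear in real time rather than after the full answer is generated.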
Switch models
Choose a different LLM model:
yoda summon
Available models:
- codellama:7b (default, good balance)
- starcoder2:7b (specialized for code)
- deepseek-coder:6.7b (efficient)
Update the index
Re-index after making code changes:
yoda update
Troubleshooting
Tree-sitter build errors
If you encounter tree-sitter compilation issues:
- Install build tools (macOS):
xcode-select --install
# On Debian/Ubuntu:
sudo apt-get install build-essential
- Set compiler explicitly:
export CC=clang
export CXX=clang++
pip install --upgrade tree-sitter
Faiss installation issues
If faiss-cpu fails to install:
- Use conda (alternative approach):
conda install -c pytorch faiss-cpu
- Or install from source:
pip install --no-binary :all: faiss-cpu
Ollama connection errors
If Yoda can't connect to Ollama (this should rarely happen as Yoda auto-starts it):
- Check if Ollama is running:
ps aux | grep ollama
- Manually restart Ollama:
pkill ollama
ollama serve &
- Verify model is available:
ollama list
Note: Yoda automatically handles Ollama installation and startup, so you typically won't need these commands.
Memory issues with large codebases
For large projects:
- Use a smaller model (specify during init):
yoda init . --model codellama:7b # default, uses less memory
# or for better quality but more memory:
yoda init . --model mistral:7b
- Increase chunk size in the indexer (edit src/yoda/core/indexer.py):
CHUNK_SIZE = 2048 # increase if needed
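To see why this helps, here is what a chunk-size knob typically controls: files are split into overlapping windows before embedding, so larger chunks mean fewer vectors (and less index memory) at the cost of coarser retrieval. This is a hypothetical sketch, not Yoda's actual indexer code.

```python
# Split text into fixed-size windows with overlap so no context is lost
# at chunk boundaries. Larger chunk_size -> fewer chunks -> smaller index.
def chunk_text(text: str, chunk_size: int = 2048, overlap: int = 256) -> list[str]:
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

source = "x" * 5000
print(len(chunk_text(source, chunk_size=2048)))  # large chunks: fewer vectors
print(len(chunk_text(source, chunk_size=512)))   # small chunks: many more vectors
```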
Python version compatibility
Ensure you're using Python 3.9 or higher:
python --version
If needed, specify the Python version:
python3.11 -m venv venv
Configuration
Yoda stores its index and metadata in .yoda/ within your project directory:
your-project/
└── .yoda/
    ├── index/        # Faiss vector index
    ├── documents/    # Parsed documents
    └── config.json   # Project configuration
Requirements
- Python 3.9+
- Ollama with a code-capable model (codellama, mistral, etc.)
- 4GB+ RAM recommended
- macOS, Linux, or Windows with WSL
Packaging & Publishing
For Users (Once Published to PyPI)
Install Yoda with a single command:
pip install yoda-cli
yoda init /path/to/your/project
For Maintainers
Prerequisites
- Install build tools:
pip install --upgrade build twine
- Create accounts:
  - PyPI (production): https://pypi.org/account/register/
  - TestPyPI (testing): https://test.pypi.org/account/register/
- Get API tokens
- Configure .pypirc:
# Create ~/.pypirc
cat > ~/.pypirc << 'EOF'
[distutils]
index-servers =
    pypi
    testpypi

[pypi]
username = __token__
password = pypi-YOUR-PRODUCTION-TOKEN

[testpypi]
username = __token__
password = pypi-YOUR-TEST-TOKEN
EOF
chmod 600 ~/.pypirc
Version Management
Update version in both files before building:
setup.py:
version="0.1.0", # Update this
src/yoda/__init__.py:
__version__ = "0.1.0" # Update this
Follow semantic versioning: MAJOR.MINOR.PATCH
- MAJOR: Breaking changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes
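The bump rules above can be written as a tiny helper (illustrative only, not part of Yoda's tooling):

```python
# Apply a semantic-versioning bump to a MAJOR.MINOR.PATCH string.
def bump(version: str, part: str) -> str:
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking changes reset minor and patch
    if part == "minor":
        return f"{minor and major or major}.{minor + 1}.0" if False else f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("0.1.0", "patch"))  # -> 0.1.1
```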
Build Process
# Clean previous builds
rm -rf dist/ build/ *.egg-info
# Build the package (creates wheel and source distribution)
python -m build
# Verify the build
twine check dist/*
# View contents (optional)
tar -tzf dist/yoda_cli-*.tar.gz
Publishing Steps
1. Test on TestPyPI first (ALWAYS):
# Upload to TestPyPI
twine upload --repository testpypi dist/*
# Test install in a clean environment
python -m venv test_env
source test_env/bin/activate
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ yoda-cli
# Test the installation
yoda --version
yoda init --help
# If everything works, deactivate and clean up
deactivate
rm -rf test_env
2. Publish to PyPI (production):
# Upload to production PyPI
twine upload dist/*
# Verify it's live
pip install yoda-cli
Post-Release
- Tag the release:
git tag -a v0.1.0 -m "Release version 0.1.0"
git push origin v0.1.0
- Create a GitHub release:
  - Go to https://github.com/your-org/yoda/releases/new
  - Select the tag
  - Add release notes
- Update the changelog:
## [0.1.0] - 2025-01-15
### Added
- Initial release
- WISDOM.md generation with Mermaid diagrams
- Interactive chat with syntax highlighting
- Model switching support
Package Structure
Your package should have this structure:
yoda/
├── setup.py            # Package configuration
├── README.md           # This file (shown on PyPI)
├── LICENSE             # MIT License
├── requirements.txt    # Dependencies
├── MANIFEST.in         # Include non-Python files
├── src/
│   └── yoda/
│       ├── __init__.py # Version info
│       ├── cli.py      # Main CLI
│       ├── core/       # Core modules
│       └── utils/      # Utilities
└── dist/               # Built packages (generated)
Common Issues
Issue: "File already exists"
# You're trying to upload the same version twice
# Solution: Bump the version number
Issue: Missing dependencies in installed package
# Solution: Ensure requirements.txt is complete
# Test in clean virtualenv
Issue: Module not found after install
# Solution: Check setup.py packages configuration
# Should be: packages=find_packages(where="src")
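A minimal setup.py consistent with the src/ layout above might look like this. `find_packages` and `package_dir` are standard setuptools; the package metadata and the entry-point target (`yoda.cli:main`) are assumptions about this project, so check the actual setup.py.

```python
# Minimal src-layout packaging sketch: package_dir tells setuptools that
# importable code lives under src/, and find_packages discovers it there.
from setuptools import setup, find_packages

setup(
    name="yoda-cli",
    version="0.1.0",
    package_dir={"": "src"},
    packages=find_packages(where="src"),
    entry_points={"console_scripts": ["yoda = yoda.cli:main"]},
)
```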
Automation (Optional)
Create .github/workflows/publish.yml for automatic releases:
name: Publish to PyPI

on:
  release:
    types: [published]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - run: pip install build twine
      - run: python -m build
      - run: twine upload dist/*
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
License
MIT License - See LICENSE file for details