File Organizer v2.0
AI-powered local file management. Local-first by default (Ollama, no cloud required) -- or connect any OpenAI-compatible endpoint or Anthropic Claude when you need it.
307 tests | 334 modules | 48+ file types | Python 3.11+
Features
- AI-Powered Organization: Qwen 2.5 3B (text) + Qwen 2.5-VL 7B (vision) via Ollama — or any OpenAI-compatible endpoint (OpenAI, LM Studio, vLLM) — or Anthropic Claude
- Audio Transcription: Local speech-to-text with faster-whisper (GPU-accelerated)
- Video Analysis: Scene detection and keyframe extraction
- Copilot Chat: Natural-language assistant -- "organize ./Downloads", "find report.pdf", "undo"
- Organization Rules: Automated sorting with conditions, preview, and YAML persistence
- Terminal UI: 8-view Textual TUI (Files, Analytics, Audio, History, Copilot, and more)
- Web UI: Browser-based interface via FastAPI and HTMX
- Full CLI: Organize, rules, suggest, dedupe, daemon, analytics, update, profiles
- Auto-Update: GitHub Releases checks with verified downloads and rollback
- Intelligence: Pattern learning, preference tracking, smart suggestions, auto-tagging
- Deduplication: Hash and semantic duplicate detection
- Undo/Redo: Full operation history
- PARA + Johnny Decimal: Built-in organizational methodologies
- Cross-Platform: macOS (DMG), Windows (installer), Linux (AppImage) executables
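The hash-based half of the deduplication feature can be illustrated with a short sketch. This is not the package's actual implementation (the function name and grouping strategy here are illustrative assumptions); it simply shows the standard technique of grouping files by content digest:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_hash_duplicates(paths):
    """Group files by SHA-256 content hash; any group with more
    than one entry is a set of exact duplicates."""
    by_hash = defaultdict(list)
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        by_hash[digest].append(str(p))
    return [group for group in by_hash.values() if len(group) > 1]
```

Semantic duplicate detection works differently (similarity over embeddings or perceptual hashes rather than exact digests), which is why the tool offers both modes.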
Screenshots
Quick Start
```shell
# Install (editable, from a source checkout)
pip install -e .

# Pull models
ollama pull qwen2.5:3b-instruct-q4_K_M
ollama pull qwen2.5vl:7b-q4_K_M

# Organize files (dry run first)
file-organizer organize ./Downloads ./Organized --dry-run

# Launch the TUI
file-organizer tui
```
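Before the first organize run, you can confirm that both models above are actually pulled. Ollama exposes a REST endpoint, `GET /api/tags`, that lists installed models as JSON; the helper below (an illustrative sketch, not part of this package) parses that response:

```python
import json
from urllib.request import urlopen

REQUIRED_MODELS = {"qwen2.5:3b-instruct-q4_K_M", "qwen2.5vl:7b-q4_K_M"}

def missing_models(tags_json, required=REQUIRED_MODELS):
    """Given the JSON body of Ollama's GET /api/tags response,
    return the required models that have not been pulled yet."""
    installed = {m["name"] for m in json.loads(tags_json).get("models", [])}
    return sorted(required - installed)

# Against a live Ollama instance (default port 11434):
# body = urlopen("http://localhost:11434/api/tags").read()
# print(missing_models(body) or "all models present")
```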
Web UI (Preview)
Start the FastAPI server and open the UI:
```shell
uvicorn file_organizer.api.main:app --reload
```
Then visit http://localhost:8000/ui/ for the HTMX interface.
Documentation
Optional Feature Packs
| Pack | Install Command | Features |
|---|---|---|
| Cloud | pip install -e ".[cloud]" | OpenAI-compatible API provider (OpenAI, LM Studio, vLLM) |
| Claude | pip install -e ".[claude]" | Anthropic Claude API provider (text + vision) |
| LLaMA | pip install -e ".[llama]" | Local llama.cpp inference (GGUF models, no Ollama needed) |
| Audio | pip install -e ".[audio]" | Speech-to-text (faster-whisper, torch) |
| Video | pip install -e ".[video]" | Scene detection (OpenCV, scenedetect) |
| Dedup | pip install -e ".[dedup]" | Image deduplication (perceptual hashing) |
| Archive | pip install -e ".[archive]" | 7z and RAR archive support |
| Scientific | pip install -e ".[scientific]" | HDF5, NetCDF, MATLAB formats |
| CAD | pip install -e ".[cad]" | DXF and CAD format support |
| Build | pip install -e ".[build]" | Executable packaging (PyInstaller) |
| All | pip install -e ".[all]" | Everything above |
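Packs like these are typically gated at runtime with an import check, so the core tool keeps working when an extra is not installed. The sketch below shows that common pattern; the module names in the map are illustrative assumptions, not the project's actual feature checks:

```python
import importlib.util

# Map each optional feature to the module its extra installs.
# These pairings are illustrative assumptions for this sketch.
FEATURE_MODULES = {
    "audio": "faster_whisper",
    "video": "cv2",
    "dedup": "imagehash",
}

def available_features(feature_modules=FEATURE_MODULES):
    """Return the set of optional features whose backing module
    can be imported in the current environment."""
    return {
        name for name, module in feature_modules.items()
        if importlib.util.find_spec(module) is not None
    }
```

`importlib.util.find_spec` probes for the module without importing it, so the check is cheap even for heavy dependencies like torch.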
Audio system dependencies
For full audio format support, the [audio] pack uses FFmpeg (all platforms) and optionally CUDA + cuDNN (NVIDIA GPU users).
FFmpeg — required for non-.wav formats (MP3, M4A, FLAC, OGG); optional if you only transcribe raw .wav:
```shell
# macOS
brew install ffmpeg

# Ubuntu / Debian
sudo apt install ffmpeg

# Windows (winget)
winget install ffmpeg
```
CUDA + cuDNN — optional, for significantly faster transcription (see faster-whisper benchmarks for hardware-specific numbers):
```shell
# Install CUDA Toolkit from https://developer.nvidia.com/cuda-downloads
# Install cuDNN from https://developer.nvidia.com/cudnn

# Verify the full transcription backend (not just PyTorch)
python3 -c "from faster_whisper import WhisperModel; print('faster-whisper OK')"
python3 -c "import torch; print('CUDA:', torch.cuda.is_available())"
```
Fallback behavior: without FFmpeg, only .wav files are transcribed; other formats are organized by filename/metadata but not content-analyzed. Without CUDA, transcription runs on CPU (slower but fully functional).
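The documented fallback can be expressed as a small predicate. This is an illustrative sketch, not the package's actual code (the function name and format sets are assumptions drawn from the paragraph above):

```python
import shutil
from pathlib import Path

WAV_FORMATS = {".wav"}                         # always transcribable
FFMPEG_FORMATS = {".mp3", ".m4a", ".flac", ".ogg"}  # need FFmpeg for decoding

def can_transcribe(path, ffmpeg_available=None):
    """Mirror the documented fallback: .wav always works;
    other supported formats require FFmpeg on PATH."""
    if ffmpeg_available is None:
        ffmpeg_available = shutil.which("ffmpeg") is not None
    ext = Path(path).suffix.lower()
    if ext in WAV_FORMATS:
        return True
    return ext in FFMPEG_FORMATS and ffmpeg_available
```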
See the Installation Guide for troubleshooting and advanced configuration.
Development
```shell
# Run tests
pytest

# Lint
ruff check src/
```
Configuration
Config lives in config/file-organizer/config.yaml relative to your config home. Override with FILE_ORGANIZER_CONFIG.
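Resolution order can be sketched as follows. This is an illustrative helper, not the package's actual loader; in particular, the XDG-style fallback to ~/.config for the "config home" is an assumption of this sketch, and the real tool may resolve it differently per platform:

```python
import os
from pathlib import Path

def resolve_config_path(env=os.environ):
    """FILE_ORGANIZER_CONFIG wins outright; otherwise look for
    file-organizer/config.yaml under the config home
    (XDG_CONFIG_HOME, falling back to ~/.config)."""
    override = env.get("FILE_ORGANIZER_CONFIG")
    if override:
        return Path(override)
    config_home = Path(env.get("XDG_CONFIG_HOME", Path.home() / ".config"))
    return config_home / "file-organizer" / "config.yaml"
```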
Status: Alpha | Version: 2.0.0-alpha.1 | Last Updated: 2026-03-01
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file local_file_organizer-2.0.0a3.tar.gz.
File metadata
- Download URL: local_file_organizer-2.0.0a3.tar.gz
- Upload date:
- Size: 868.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c8657762c20a085ece167630061a66dfb13ad8d4c1095235d328bdc8b87c7613 |
| MD5 | 37b23115ccd0fbb5e009320b7d023e76 |
| BLAKE2b-256 | aafa7554a106ea05f57e0797b2ccf61f4a0f09cb9d7286143dcda07c986bc6f0 |
Provenance
The following attestation bundles were made for local_file_organizer-2.0.0a3.tar.gz:
Publisher: release.yml on curdriceaurora/Local-File-Organizer

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: local_file_organizer-2.0.0a3.tar.gz
- Subject digest: c8657762c20a085ece167630061a66dfb13ad8d4c1095235d328bdc8b87c7613
- Sigstore transparency entry: 1195938229
- Sigstore integration time:
- Permalink: curdriceaurora/Local-File-Organizer@7eaae2cbfc0a8ec923e4b7be9d298864d4944031
- Branch / Tag: refs/tags/v2.0.0-alpha.3
- Owner: https://github.com/curdriceaurora
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@7eaae2cbfc0a8ec923e4b7be9d298864d4944031
- Trigger Event: push
File details
Details for the file local_file_organizer-2.0.0a3-py3-none-any.whl.
File metadata
- Download URL: local_file_organizer-2.0.0a3-py3-none-any.whl
- Upload date:
- Size: 1.0 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8295b0b4dc2064f0e896b2f4c9f89b074f4b910b0e00600c0a2343774bb5ad45 |
| MD5 | f36c027faba88bda4825d812d9a311c7 |
| BLAKE2b-256 | 16e1490c380e23c0ffd1f8082eeac6d6cca95949728c581e698419265881cd89 |
Provenance
The following attestation bundles were made for local_file_organizer-2.0.0a3-py3-none-any.whl:
Publisher: release.yml on curdriceaurora/Local-File-Organizer

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: local_file_organizer-2.0.0a3-py3-none-any.whl
- Subject digest: 8295b0b4dc2064f0e896b2f4c9f89b074f4b910b0e00600c0a2343774bb5ad45
- Sigstore transparency entry: 1195938280
- Sigstore integration time:
- Permalink: curdriceaurora/Local-File-Organizer@7eaae2cbfc0a8ec923e4b7be9d298864d4944031
- Branch / Tag: refs/tags/v2.0.0-alpha.3
- Owner: https://github.com/curdriceaurora
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@7eaae2cbfc0a8ec923e4b7be9d298864d4944031
- Trigger Event: push