
PDF Renamer


Intelligent PDF file renaming using LLMs. This tool analyzes PDF content and metadata to suggest descriptive, standardized filenames.

🚀 Works with OpenAI, Ollama, LM Studio, and any OpenAI-compatible API

Features

  • Advanced PDF parsing using docling-parse for better structure-aware extraction
  • OCR fallback for scanned PDFs with low text content
  • Smart LLM prompting with multi-pass analysis for improved accuracy
  • Suggests filenames in the format Author-Topic-Year.pdf
  • Dry-run mode to preview changes before applying
  • Enhanced interactive mode with options to accept, manually edit, retry, or skip each file
  • Live progress display with concurrent processing for speed
  • Configurable concurrency limits for API calls and PDF extraction
  • Batch processing of multiple PDFs with optional output directory

Installation

Quick Start (No Installation Required)

# Run directly with uvx
uvx pdf-renamer --dry-run /path/to/pdfs

Install from PyPI

# Using pip
pip install pdf-file-renamer

# Using uv
uv pip install pdf-file-renamer

Install from Source

# Clone and install
git clone https://github.com/nostoslabs/pdf-renamer.git
cd pdf-renamer
uv sync

Configuration

Configure your LLM provider:

Option A: OpenAI (Cloud)

cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

Option B: Ollama or other local models

# No API key needed for local models
# Either set LLM_BASE_URL in .env or use --url flag
echo "LLM_BASE_URL=http://patmos:11434/v1" > .env

Usage

Quick Start

# Preview renames (dry-run mode)
pdf-renamer --dry-run /path/to/pdf/directory

# Actually rename files
pdf-renamer --no-dry-run /path/to/pdf/directory

# Interactive mode - review each file
pdf-renamer --interactive --no-dry-run /path/to/pdf/directory

Using uvx (No Installation)

# Run directly without installing
uvx pdf-renamer --dry-run /path/to/pdfs

# Run from GitHub
uvx https://github.com/nostoslabs/pdf-renamer --dry-run /path/to/pdfs

Options

  • --dry-run/--no-dry-run: Show suggestions without renaming (default: True)
  • --interactive, -i: Interactive mode with rich options:
    • Accept - Use the suggested filename
    • Edit - Manually modify the filename
    • Retry - Ask the LLM to generate a new suggestion
    • Skip - Skip this file and move to the next
  • --model: Model to use (default: llama3.2, works with any OpenAI-compatible API)
  • --url: Custom base URL for OpenAI-compatible APIs (default: http://localhost:11434/v1)
  • --pattern: Glob pattern for files (default: *.pdf)
  • --output-dir, -o: Move renamed files to a different directory
  • --max-concurrent-api: Maximum concurrent API calls (default: 3)
  • --max-concurrent-pdf: Maximum concurrent PDF extractions (default: 10)
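
As a sketch of what the concurrency limits do, in-flight calls can be bounded with an asyncio.Semaphore; the function names here are illustrative, not the tool's internals:

```python
import asyncio

MAX_CONCURRENT_API = 3  # mirrors the --max-concurrent-api default

async def suggest_name(sem: asyncio.Semaphore, path: str) -> str:
    async with sem:
        # Stand-in for the real LLM request; at most
        # MAX_CONCURRENT_API of these run at once.
        await asyncio.sleep(0)
        return f"suggested-{path}"

async def suggest_all(paths: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT_API)
    # gather preserves input order even though tasks overlap
    return list(await asyncio.gather(*(suggest_name(sem, p) for p in paths)))

results = asyncio.run(suggest_all(["a.pdf", "b.pdf"]))
```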

Examples

Using OpenAI:

# Preview all PDFs in current directory
uvx pdf-renamer --dry-run .

# Rename PDFs in specific directory
uvx pdf-renamer --no-dry-run ~/Documents/Papers

# Use a different OpenAI model
uvx pdf-renamer --model gpt-4o --dry-run .

Using Ollama (or other local models):

# Using Ollama on patmos server with gemma model
uvx pdf-renamer --url http://patmos:11434/v1 --model gemma3:latest --dry-run .

# Using local Ollama with qwen model
uvx pdf-renamer --url http://localhost:11434/v1 --model qwen2.5 --dry-run .

# Set URL in environment and just use model flag
export LLM_BASE_URL=http://patmos:11434/v1
uvx pdf-renamer --model gemma3:latest --dry-run .

Other examples:

# Process only specific files
uvx pdf-renamer --pattern "*2020*.pdf" --dry-run .

# Interactive mode with local model
uvx pdf-renamer --url http://patmos:11434/v1 --model gemma3:latest --interactive --no-dry-run .

# Run directly from GitHub
uvx https://github.com/nostoslabs/pdf-renamer --no-dry-run ~/Documents/Papers

Interactive Mode

In --interactive mode, each file is presented one at a time with detailed options:

================================================================================
Original: 2024-research-paper.pdf
Suggested: Smith-Machine-Learning-Applications-2024.pdf
Confidence: high
Reasoning: Clear author and topic identified from abstract
================================================================================

Options:
  y / yes / Enter - Accept suggested name
  e / edit - Manually edit the filename
  r / retry - Ask LLM to generate a new suggestion
  n / no / skip - Skip this file

What would you like to do? [y]:

This mode is perfect for:

  • Reviewing suggestions before applying them
  • Fine-tuning filenames that are close but not quite right
  • Retrying when the LLM suggestion isn't good enough
  • Building confidence in the tool before batch processing

You can use interactive mode with --dry-run to preview without actually renaming files, or with --no-dry-run to apply changes immediately after confirmation.
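
The prompt above maps each response to one of four actions; a minimal sketch of that mapping (the function name is hypothetical, not the tool's code):

```python
def classify_choice(raw: str) -> str:
    """Map raw prompt input to an action, mirroring the options above."""
    choice = raw.strip().lower()
    if choice in ("", "y", "yes"):   # Enter defaults to accept
        return "accept"
    if choice in ("e", "edit"):
        return "edit"
    if choice in ("r", "retry"):
        return "retry"
    if choice in ("n", "no", "skip"):
        return "skip"
    return "unknown"
```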

How It Works

  1. Extract: Uses docling-parse to read the first five pages with structure-aware parsing, falling back to PyMuPDF if needed
  2. OCR: Automatically applies OCR for scanned PDFs with minimal text
  3. Metadata Enhancement: Extracts focused hints (years, emails, author sections) to supplement unreliable PDF metadata
  4. Analyze: Sends full content excerpt to LLM with enhanced metadata and detailed extraction instructions
  5. Multi-pass Review: Low-confidence results trigger a second analysis pass with focused prompts
  6. Suggest: LLM returns filename in Author-Topic-Year format with confidence level and reasoning
  7. Interactive Review (optional): User can accept, edit, retry, or skip each suggestion
  8. Rename: Applies suggestions (if not in dry-run mode)
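
Steps 1-2 amount to trying extractors in order and falling back when the yield is too low. A simplified sketch with injected extractor callables; the threshold and names are illustrative, not the tool's actual values:

```python
from typing import Callable

MIN_CHARS = 50  # illustrative "minimal text" threshold that would trigger the fallback

def extract_text(
    path: str, extractors: list[tuple[str, Callable[[str], str]]]
) -> tuple[str, str]:
    """Return (extractor_name, text) from the first backend that succeeds
    and yields enough text; otherwise ("none", "")."""
    for name, fn in extractors:
        try:
            text = fn(path)
        except Exception:
            continue  # e.g. structure-aware parsing failed; try the next backend
        if len(text.strip()) >= MIN_CHARS:
            return name, text
    return "none", ""
```

In the real pipeline the backends would be docling-parse, PyMuPDF, and finally OCR; here they are plain callables so the control flow stands alone.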

Cost Considerations

OpenAI:

  • Uses gpt-4o-mini by default (very cost-effective)
  • Processes first ~4500 characters per PDF
  • Typical cost: ~$0.001-0.003 per PDF

Ollama/Local Models:

  • Completely free (runs on your hardware)
  • Works with any Ollama model (llama3, qwen2.5, mistral, etc.)
  • Also compatible with LM Studio, vLLM, and other OpenAI-compatible endpoints

Filename Format

The tool generates filenames in this format:

  • Smith-Kalman-Filtering-Applications-2020.pdf
  • Adamy-Electronic-Warfare-Modeling-Techniques.pdf
  • Blair-Monopulse-Processing-Unresolved-Targets.pdf

Guidelines:

  • First author's last name
  • 3-6 word topic description (prioritizes clarity over brevity)
  • Year (if identifiable)
  • Hyphens between words
  • Target ~80 characters (can be longer if needed for clarity)
