Intelligent PDF renaming using LLMs

Project description

PDF Renamer

Intelligent PDF file renaming using LLMs. This tool analyzes PDF content and metadata to suggest descriptive, standardized filenames.

🚀 Works with OpenAI, Ollama, LM Studio, and any OpenAI-compatible API

Features

  • Advanced PDF parsing using docling-parse for better structure-aware extraction
  • OCR fallback for scanned PDFs with low text content
  • Smart LLM prompting with multi-pass analysis for improved accuracy
  • Suggests filenames in format: Author-Topic-Year.pdf
  • Dry-run mode to preview changes before applying
  • Enhanced interactive mode with options to accept, manually edit, retry, or skip each file
  • Live progress display with concurrent processing for speed
  • Configurable concurrency limits for API calls and PDF extraction
  • Batch processing of multiple PDFs with optional output directory

Installation

Quick Start (No Installation Required)

# Run directly with uvx
uvx pdf-renamer --dry-run /path/to/pdfs

Install from PyPI

# Using pip
pip install pdf-file-renamer

# Using uv
uv pip install pdf-file-renamer

Install from Source

# Clone and install
git clone https://github.com/nostoslabs/pdf-renamer.git
cd pdf-renamer
uv sync

Configuration

Configure your LLM provider:

Option A: OpenAI (Cloud)

cp .env.example .env
# Edit .env and add your OPENAI_API_KEY

Option B: Ollama or other local models

# No API key needed for local models
# Either set LLM_BASE_URL in .env or use --url flag
echo "LLM_BASE_URL=http://patmos:11434/v1" > .env

Usage

Quick Start

# Preview renames (dry-run mode)
pdf-renamer --dry-run /path/to/pdf/directory

# Actually rename files
pdf-renamer --no-dry-run /path/to/pdf/directory

# Interactive mode - review each file
pdf-renamer --interactive --no-dry-run /path/to/pdf/directory

Using uvx (No Installation)

# Run directly without installing
uvx pdf-renamer --dry-run /path/to/pdfs

# Run from GitHub
uvx https://github.com/nostoslabs/pdf-renamer --dry-run /path/to/pdfs

Options

  • --dry-run/--no-dry-run: Show suggestions without renaming (default: True)
  • --interactive, -i: Interactive mode with rich options:
    • Accept - Use the suggested filename
    • Edit - Manually modify the filename
    • Retry - Ask the LLM to generate a new suggestion
    • Skip - Skip this file and move to the next
  • --model: Model to use (default: llama3.2, works with any OpenAI-compatible API)
  • --url: Custom base URL for OpenAI-compatible APIs (default: http://localhost:11434/v1)
  • --pattern: Glob pattern for files (default: *.pdf)
  • --output-dir, -o: Move renamed files to a different directory
  • --max-concurrent-api: Maximum concurrent API calls (default: 3)
  • --max-concurrent-pdf: Maximum concurrent PDF extractions (default: 10)
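The two concurrency limits can be pictured as independent semaphores, one gating LLM API calls and one gating PDF extraction. A minimal sketch under that assumption (the parsing and API calls are stand-in sleeps, not the tool's real code):

```python
import asyncio

# Sketch of enforcing --max-concurrent-api and --max-concurrent-pdf
# with two asyncio semaphores; the defaults (3 and 10) match the docs.
async def run_batch(paths: list[str], max_api: int = 3, max_pdf: int = 10) -> list[str]:
    api_sem = asyncio.Semaphore(max_api)   # limits concurrent LLM calls
    pdf_sem = asyncio.Semaphore(max_pdf)   # limits concurrent extractions

    async def process(path: str) -> str:
        async with pdf_sem:
            await asyncio.sleep(0.01)      # stand-in for PDF parsing
        async with api_sem:
            await asyncio.sleep(0.01)      # stand-in for the API request
        return "Author-Topic-Year.pdf"

    return await asyncio.gather(*(process(p) for p in paths))

names = asyncio.run(run_batch([f"doc{i}.pdf" for i in range(5)]))
print(names)
```

Separate limits let cheap local extraction run wider than rate-limited API calls.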

Examples

Using OpenAI:

# Preview all PDFs in current directory
uvx pdf-renamer --dry-run .

# Rename PDFs in specific directory
uvx pdf-renamer --no-dry-run ~/Documents/Papers

# Use a different OpenAI model
uvx pdf-renamer --model gpt-4o --dry-run .

Using Ollama (or other local models):

# Using Ollama on patmos server with gemma model
uvx pdf-renamer --url http://patmos:11434/v1 --model gemma3:latest --dry-run .

# Using local Ollama with qwen model
uvx pdf-renamer --url http://localhost:11434/v1 --model qwen2.5 --dry-run .

# Set URL in environment and just use model flag
export LLM_BASE_URL=http://patmos:11434/v1
uvx pdf-renamer --model gemma3:latest --dry-run .

Other examples:

# Process only specific files
uvx pdf-renamer --pattern "*2020*.pdf" --dry-run .

# Interactive mode with local model
uvx pdf-renamer --url http://patmos:11434/v1 --model gemma3:latest --interactive --no-dry-run .

# Run directly from GitHub
uvx https://github.com/nostoslabs/pdf-renamer --no-dry-run ~/Documents/Papers

Interactive Mode

When using --interactive mode, each file is presented one at a time with detailed options:

================================================================================
Original: 2024-research-paper.pdf
Suggested: Smith-Machine-Learning-Applications-2024.pdf
Confidence: high
Reasoning: Clear author and topic identified from abstract
================================================================================

Options:
  y / yes / Enter - Accept suggested name
  e / edit - Manually edit the filename
  r / retry - Ask LLM to generate a new suggestion
  n / no / skip - Skip this file

What would you like to do? [y]:

This mode is perfect for:

  • Reviewing suggestions before applying them
  • Fine-tuning filenames that are close but not quite right
  • Retrying when the LLM suggestion isn't good enough
  • Building confidence in the tool before batch processing

You can use interactive mode with --dry-run to preview without actually renaming files, or with --no-dry-run to apply changes immediately after confirmation.

How It Works

  1. Extract: Uses docling-parse to read the first 5 pages with structure-aware parsing, falling back to PyMuPDF if needed
  2. OCR: Automatically applies OCR for scanned PDFs with minimal text
  3. Metadata Enhancement: Extracts focused hints (years, emails, author sections) to supplement unreliable PDF metadata
  4. Analyze: Sends full content excerpt to LLM with enhanced metadata and detailed extraction instructions
  5. Multi-pass Review: Low-confidence results trigger a second analysis pass with focused prompts
  6. Suggest: LLM returns filename in Author-Topic-Year format with confidence level and reasoning
  7. Interactive Review (optional): User can accept, edit, retry, or skip each suggestion
  8. Rename: Applies suggestions (if not in dry-run mode)
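The steps above can be sketched as a short pipeline. This is a schematic only: the extraction, OCR, and LLM calls are stubs, and `MIN_TEXT_CHARS` is an assumed threshold (the 4500-character excerpt comes from the cost notes):

```python
from dataclasses import dataclass

MIN_TEXT_CHARS = 100      # assumed threshold for "minimal text" -> OCR
EXCERPT_CHARS = 4500      # excerpt size sent to the LLM

@dataclass
class Suggestion:
    filename: str
    confidence: str
    reasoning: str

def extract_text(path: str) -> str:   # step 1: docling-parse / PyMuPDF (stub)
    return "..."

def ocr(path: str) -> str:            # step 2: OCR fallback (stub)
    return "..."

def llm_suggest(excerpt: str, focused: bool = False) -> Suggestion:
    # Stub LLM call; the focused second pass returns higher confidence here
    # purely to illustrate the multi-pass flow.
    return Suggestion("Author-Topic-Year.pdf", "high" if focused else "low", "stub")

def suggest_filename(path: str) -> Suggestion:
    text = extract_text(path)
    if len(text) < MIN_TEXT_CHARS:                     # step 2: OCR fallback
        text = ocr(path)
    suggestion = llm_suggest(text[:EXCERPT_CHARS])     # steps 3-4: analyze
    if suggestion.confidence == "low":                 # step 5: second pass
        suggestion = llm_suggest(text[:EXCERPT_CHARS], focused=True)
    return suggestion                                  # steps 6-8 follow

print(suggest_filename("paper.pdf").filename)
```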

Cost Considerations

OpenAI:

  • Uses gpt-4o-mini by default (very cost-effective)
  • Processes first ~4500 characters per PDF
  • Typical cost: ~$0.001-0.003 per PDF

Ollama/Local Models:

  • Completely free (runs on your hardware)
  • Works with any Ollama model (llama3, qwen2.5, mistral, etc.)
  • Also compatible with LM Studio, vLLM, and other OpenAI-compatible endpoints
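A back-of-the-envelope check of the OpenAI figure, under two stated assumptions: roughly 4 characters per token, and gpt-4o-mini input pricing around $0.15 per million tokens (verify current pricing; output tokens add a little more):

```python
# Rough per-PDF input cost estimate under the assumptions above.
chars_per_pdf = 4500
input_tokens = chars_per_pdf / 4                  # ~1125 tokens
input_cost = input_tokens / 1_000_000 * 0.15      # dollars
print(f"~${input_cost:.5f} input cost per PDF")
```

That lands comfortably inside the quoted ~$0.001-0.003 range once output tokens are included.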

Filename Format

The tool generates filenames in this format:

  • Smith-Kalman-Filtering-Applications-2020.pdf
  • Adamy-Electronic-Warfare-Modeling-Techniques.pdf
  • Blair-Monopulse-Processing-Unresolved-Targets.pdf

Guidelines:

  • First author's last name
  • 3-6 word topic description (prioritizes clarity over brevity)
  • Year (if identifiable)
  • Hyphens between words
  • Target ~80 characters (can be longer if needed for clarity)

Download files

Download the file for your platform.

Source Distribution

pdf_file_renamer-0.4.2.tar.gz (22.9 kB)

Uploaded Source

Built Distribution

pdf_file_renamer-0.4.2-py3-none-any.whl (27.4 kB)

Uploaded Python 3

File details

Details for the file pdf_file_renamer-0.4.2.tar.gz.

File metadata

  • Download URL: pdf_file_renamer-0.4.2.tar.gz
  • Size: 22.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for pdf_file_renamer-0.4.2.tar.gz
Algorithm Hash digest
SHA256 f2d9d790a2c7b910cef82c0907f4b42d57524827221ae6545bbeeb2f855b574a
MD5 c1184d5915c73727e8f50217ca633c44
BLAKE2b-256 d8fd12b2d60266d4a8e39f9e5c4f63b867de65de3baf02041b407c17372b7954


File details

Details for the file pdf_file_renamer-0.4.2-py3-none-any.whl.

File hashes

Hashes for pdf_file_renamer-0.4.2-py3-none-any.whl
Algorithm Hash digest
SHA256 30fc3c830734aad266a07f6df4f3d95cf424175ba12a8dab74263ec0fbc44e82
MD5 d464eb7dff6daf908d80a4da3e068115
BLAKE2b-256 3a0e12057556920566ee682061be24d85e4365819f34a5bf19549e0b0ed46fdc

