

Vision Parse

License: MIT · Author: Arun Brahma

🚀 Parse PDF documents into beautifully formatted markdown content using state-of-the-art Vision Language Models - all with just a few lines of code!

🎯 Introduction

Vision Parse harnesses the power of Vision Language Models to revolutionize document processing:

  • 📝 Smart Content Extraction: Intelligently identifies and extracts text and tables with high precision
  • 🎨 Content Formatting: Preserves document hierarchy, styling, and indentation for markdown formatted content
  • 🤖 Multi-LLM Support: Supports multiple Vision LLM providers such as OpenAI, Llama, and Gemini for accuracy and speed
  • 🔄 PDF Document Support: Handles multi-page PDF documents effortlessly by converting each page into a base64-encoded image
  • 📁 Local Model Hosting: Supports local model hosting using Ollama for secure document processing and for offline use
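
The page-to-image step mentioned above can be illustrated with a short, self-contained sketch. This mirrors the idea rather than Vision Parse's internal implementation; the placeholder bytes stand in for a rendered page image that a PDF renderer would produce:

```python
import base64

# Placeholder bytes standing in for a rendered PDF page (PNG signature shown);
# in practice a PDF renderer would produce the full image bytes per page.
page_png_bytes = b"\x89PNG\r\n\x1a\n"

# Base64-encode the image so it can be embedded in a Vision LLM request.
encoded_page = base64.b64encode(page_png_bytes).decode("ascii")
```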

🚀 Getting Started

Prerequisites

  • 🐍 Python >= 3.9
  • 🖥️ Ollama (if you want to use local models)
  • 🤖 API Key for OpenAI or Google Gemini (if you want to use cloud-hosted models)

Installation

Install the core package using pip (Recommended):

pip install vision-parse

Install the additional dependencies for OpenAI or Gemini:

# For OpenAI support
pip install 'vision-parse[openai]'
# For Gemini support
pip install 'vision-parse[gemini]'
# To install all the additional dependencies
pip install 'vision-parse[all]'

Install the package from source:

pip install 'git+https://github.com/iamarunbrahma/vision-parse.git#egg=vision-parse[all]'

Setting up Ollama (Optional)

See examples/ollama_setup.md for instructions on setting up Ollama locally.

⌛️ Usage

Basic Example Usage

from vision_parse import VisionParser

# Initialize parser
parser = VisionParser(
    model_name="llama3.2-vision:11b", # Local models don't require an API key
    temperature=0.4,
    top_p=0.5,
    image_mode="url", # Image mode can be "url", "base64" or None
    detailed_extraction=False, # Set to True for more detailed extraction
    enable_concurrency=False, # Set to True for parallel processing
)

# Convert PDF to markdown
pdf_path = "path/to/your/document.pdf"
markdown_pages = parser.convert_pdf(pdf_path)

# Process results
for i, page_content in enumerate(markdown_pages):
    print(f"\n--- Page {i+1} ---\n{page_content}")
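
To persist the results, each page can be written to its own markdown file. A minimal sketch — the `output_markdown` directory name and the stand-in page list are illustrative, not part of the Vision Parse API:

```python
from pathlib import Path

# Stand-in for the list returned by parser.convert_pdf(pdf_path)
markdown_pages = ["# Page one content", "# Page two content"]

out_dir = Path("output_markdown")
out_dir.mkdir(exist_ok=True)

# Write one markdown file per page
for i, page_content in enumerate(markdown_pages):
    (out_dir / f"page_{i + 1}.md").write_text(page_content, encoding="utf-8")
```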

Customized Ollama Configuration

from vision_parse import VisionParser

# Initialize parser with Ollama configuration
parser = VisionParser(
    model_name="llama3.2-vision:11b",
    temperature=0.7,
    top_p=0.6,
    num_ctx=4096,
    image_mode="base64",
    detailed_extraction=True,
    ollama_config={
        "OLLAMA_NUM_PARALLEL": "4",
    },
    enable_concurrency=True,
)

# Convert PDF to markdown
pdf_path = "path/to/your/document.pdf"
markdown_pages = parser.convert_pdf(pdf_path)

OpenAI or Gemini Model Usage

from vision_parse import VisionParser

# Initialize parser with OpenAI model
parser = VisionParser(
    model_name="gpt-4o",
    api_key="your-openai-api-key", # Get the OpenAI API key from https://platform.openai.com/api-keys
    temperature=0.7,
    top_p=0.4,
    image_mode="url",
    detailed_extraction=True, # Set to True for more detailed extraction
    enable_concurrency=True,
)

# Initialize parser with Google Gemini model
parser = VisionParser(
    model_name="gemini-1.5-flash",
    api_key="your-gemini-api-key", # Get the Gemini API key from https://aistudio.google.com/app/apikey
    temperature=0.7,
    top_p=0.4,
    image_mode="url",
    detailed_extraction=True, # Set to True for more detailed extraction
    enable_concurrency=True,
)
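
Rather than hard-coding keys as in the snippets above, it is safer to read them from the environment. The variable names below are conventional choices, not something Vision Parse requires:

```python
import os

# Read provider keys from environment variables; fall back to an empty string
openai_key = os.environ.get("OPENAI_API_KEY", "")
gemini_key = os.environ.get("GEMINI_API_KEY", "")
```

The value can then be passed as `api_key=openai_key` (or `api_key=gemini_key`) when constructing `VisionParser`.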

✅ Supported Models

This package supports the following Vision LLM models:

  • OpenAI: gpt-4o, gpt-4o-mini
  • Google Gemini: gemini-1.5-flash, gemini-2.0-flash-exp, gemini-1.5-pro
  • Meta Llama and LLaVA via Ollama: llava:13b, llava:34b, llama3.2-vision:11b, llama3.2-vision:90b
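
For scripts that switch between providers, the model name can be selected from a simple mapping. This is a sketch of caller-side convenience code (the `MODELS` dict and `pick_model` helper are hypothetical, not part of the package):

```python
# Hypothetical mapping from a provider label to one supported model name
MODELS = {
    "openai": "gpt-4o",
    "gemini": "gemini-1.5-flash",
    "ollama": "llama3.2-vision:11b",
}

def pick_model(provider: str) -> str:
    """Return a supported model name for the given provider label."""
    return MODELS[provider]
```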

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.
