# AI Vision Capture

A powerful Python library for extracting and analyzing content from PDF, image, and video files using Vision Language Models (VLMs). It provides a flexible and efficient way to process documents, with support for multiple VLM providers, including OpenAI, Anthropic Claude, Google Gemini, and Azure OpenAI.
## Features
- 🔍 Multi-Provider Support: Compatible with major VLM providers (OpenAI, Claude, Gemini, Azure, and open-source models)
- 📄 Document Processing: Process PDFs and images (JPG, PNG, TIFF, WebP, BMP)
- 🎥 Video Processing: Extract and analyze frames from video files (MP4, AVI, MOV, MKV)
- 🚀 Async Processing: Asynchronous processing with configurable concurrency
- 💾 Two-Layer Caching: Local file system and cloud caching for improved performance
- 🔄 Batch Processing: Process multiple documents in parallel
- 📝 Text Extraction: Enhanced accuracy through combined OCR and VLM processing
- 🎨 Image Quality Control: Configurable image quality settings
- 📊 Structured Output: Well-organized JSON and Markdown output
## Installation

```bash
pip install aicapture
```

### Environment Setup

Set your chosen provider and API key:

```bash
# For OpenAI
export USE_VISION=openai
export OPENAI_API_KEY=your_openai_key

# For Anthropic
export USE_VISION=anthropic
export ANTHROPIC_API_KEY=your_anthropic_key

# For Gemini
export USE_VISION=gemini
export GEMINI_API_KEY=your_google_key
```

Optional performance settings:

```bash
export MAX_CONCURRENT_TASKS=5  # Number of concurrent processing tasks
export VISION_PARSER_DPI=333   # Image DPI for PDF processing
```
## Core Capabilities

### 1. Document Parsing

The `VisionParser` provides general document processing capabilities for extracting unstructured content from documents.

```python
from aicapture import VisionParser

# Initialize parser
parser = VisionParser()

# Process a single PDF
result = parser.process_pdf("path/to/your/document.pdf")

# Process a single image
result = parser.process_image("path/to/your/image.jpg")

# Process multiple documents asynchronously
async def process_folder():
    return await parser.process_folder_async("path/to/folder")
```
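`process_folder_async` is a coroutine, so it has to be driven by an event loop. Here is a minimal sketch of that pattern, using a hypothetical stand-in coroutine in place of the real parser call (the stand-in and its return value are illustrative, not part of the library):

```python
import asyncio

# Hypothetical stand-in for parser.process_folder_async(...); the real call
# would return parsed results for every document in the folder.
async def process_folder(path: str) -> list:
    return [f"{path}/doc1.pdf", f"{path}/doc2.pdf"]

# asyncio.run() creates the event loop and waits for the coroutine to finish.
results = asyncio.run(process_folder("path/to/folder"))
print(len(results))  # 2
```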
#### Parser Output Format

```json
{
  "file_object": {
    "file_name": "example.pdf",
    "file_hash": "sha256_hash",
    "total_pages": 10,
    "total_words": 5000,
    "pages": [
      {
        "page_number": 1,
        "page_content": "extracted content",
        "page_hash": "sha256_hash"
      }
    ]
  }
}
```
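Assuming the parser returns a dictionary matching the structure above, the per-page content can be walked like plain Python data. The `result` literal below is sample data shaped like that format, not real parser output:

```python
# Sample result shaped like the parser output format above.
result = {
    "file_object": {
        "file_name": "example.pdf",
        "total_pages": 1,
        "pages": [
            {"page_number": 1, "page_content": "extracted content"},
        ],
    }
}

# Walk the pages and join their text into one string.
file_object = result["file_object"]
full_text = "\n".join(page["page_content"] for page in file_object["pages"])
print(f"{file_object['file_name']}: {len(file_object['pages'])} page(s)")
```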
### 2. Structured Data Capture

The `VisionCapture` component enables extraction of structured data from images using customizable templates.

Define your data template:

```python
# Example template for technical alarm logic
ALARM_TEMPLATE = """
alarm:
  description: string      # Main alarm description
  destination: string      # Destination system
  tag: string              # Alarm tag
  ref_logica: integer      # Logic reference number
  dependencies:
    type: array
    items:
      - signal_name: string       # Name of the dependency signal
        source: string            # Source system/component
        tag: string               # Signal tag
        ref_logica: integer|null  # Logic reference (can be null)
"""
```
Use with OpenAI Vision:

```python
from aicapture import VisionCapture, OpenAIVisionModel

vision_model = OpenAIVisionModel(
    model="gpt-4o",
    max_tokens=4096,
    api_key="your_openai_key",
)

capture = VisionCapture(vision_model=vision_model)
result = await capture.capture(
    file_path="path/to/image.png",
    template=ALARM_TEMPLATE,
)
```
Or use with Anthropic Claude:

```python
from aicapture import AnthropicVisionModel

vision_model = AnthropicVisionModel(
    model="claude-3-5-sonnet-20240620",
    max_tokens=4096,
    api_key="your_anthropic_key",
)

capture = VisionCapture(vision_model=vision_model)
result = await capture.capture(
    file_path="path/to/example.pdf",
    template=ALARM_TEMPLATE,
)
```
### 3. Video Processing

The `VidCapture` component enables extraction of knowledge from video files by extracting frames and analyzing them with VLMs.

```python
from aicapture import VidCapture, VideoConfig

# Configure video capture with custom settings
config = VideoConfig(
    frame_rate=2,                  # Extract 2 frames per second
    max_duration_seconds=30,       # Process up to 30 seconds of video
    target_frame_size=(768, 768),  # Resize frames for optimal processing
    supported_formats=(".mp4", ".avi", ".mov", ".mkv"),
)

# Initialize video capture
video_capture = VidCapture(config)

# Process a video file with a custom prompt
result = video_capture.process_video(
    video_path="path/to/your/video.mp4",
    prompt="Describe what is happening in this video.",
)

# Or extract frames for custom processing
frames, interval = video_capture.extract_frames("path/to/your/video.mp4")
print(f"Extracted {len(frames)} frames at {interval:.2f}s intervals")

# Analyze the extracted frames with a custom prompt
result = video_capture.capture(
    prompt="Analyze these video frames and describe key objects and actions.",
    images=frames,
)
```
## Advanced Usage

### Custom Vision Model Configuration

```python
from aicapture import VisionParser, GeminiVisionModel

# Configure Gemini vision model with custom settings
vision_model = GeminiVisionModel(
    model="gemini-2.0-flash",
    api_key="your_gemini_api_key",
)

# Initialize parser with custom configuration
parser = VisionParser(
    vision_model=vision_model,
    dpi=400,
    prompt="""
    Please analyze this technical document and extract:
    1. Equipment specifications and model numbers
    2. Operating parameters and limits
    3. Maintenance requirements
    4. Safety protocols
    5. Quality control metrics
    """,
)

# Process PDF with custom settings
result = parser.process_pdf(pdf_path="path/to/document.pdf")
```
## Development Setup

For local development:

- Clone the repository
- Copy `.env.template` to `.env`
- Edit `.env` with your settings
- Install development dependencies:

```bash
pip install -e ".[dev]"
```

See `.env.template` for all available configuration options.
## Documentation

For detailed configuration options and examples, see:

## Coming Soon

- 🔗 Cross-Document Knowledge Capture: Capture structured knowledge across multiple documents
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/tiny-but-mighty`)
- Commit your changes (`git commit -m 'feat: add small but delightful improvement'`)
- Push to the branch (`git push origin feature/tiny-but-mighty`)
- Open a Pull Request

For detailed guidelines, see our Contributing Guide.
## License

Copyright 2024 Aitomatic, Inc.

Licensed under the Apache License, Version 2.0. See LICENSE for details.