VideoInstruct

VideoInstruct is a tool that automatically generates step-by-step documentation from instructional videos. It uses AI to extract transcriptions, interpret video content, and create comprehensive markdown guides.

Features

  • Automatic video transcription extraction
  • AI-powered video interpretation
  • Step-by-step documentation generation
  • Automated documentation quality evaluation with conversation memory
  • Interactive Q&A workflow between AI agents
  • User feedback integration for documentation refinement
  • Configurable escalation to human users
  • Screenshot generation and annotation
  • PDF export capabilities

Project Structure

VideoInstruct/
├── data/                  # Place your video files here
├── examples/              # Example usage scripts
│   ├── example_usage.py   # Basic example with repository structure
│   └── package_usage.py   # Example using the installed package
├── output/                # Generated documentation output
├── videoinstruct/         # Main package
│   ├── agents/            # AI agent modules
│   │   ├── DocGenerator.py      # Documentation generation agent
│   │   ├── DocEvaluator.py      # Documentation evaluation agent
│   │   ├── VideoInterpreter.py  # Video interpretation agent
│   │   └── ScreenshotAgent.py   # Screenshot generation agent
│   ├── prompts/           # System prompts for agents
│   ├── tools/             # Utility tools
│   │   ├── image_annotator.py   # Image annotation tools
│   │   └── video_screenshot.py  # Video screenshot tools
│   ├── utils/             # Utility functions
│   │   ├── transcription.py     # Video transcription utilities
│   │   └── md2pdf.py            # Markdown to PDF conversion
│   ├── cli.py             # Command-line interface
│   ├── configs.py         # Configuration classes
│   ├── prompt_loader.py   # Prompt loading utilities
│   └── videoinstructor.py # Main orchestration class
├── .env                   # Environment variables (API keys)
├── MANIFEST.in            # Package manifest file
├── pyproject.toml         # Python project configuration
├── requirements.txt       # Package dependencies
├── setup.py               # Package setup file
└── README.md              # This file

Requirements

  • Python 3.8+
  • OpenAI API key (for DocGenerator)
  • Google Gemini API key (for VideoInterpreter)
  • DeepSeek API key (for DocEvaluator)
  • FFmpeg (for video processing)
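
The requirements above can be verified before a run; as one minimal sketch (standard library only, not part of VideoInstruct itself), the Python version and FFmpeg availability can be checked like this:

```python
import shutil
import sys

def check_requirements(min_python=(3, 8)):
    """Return a list of human-readable problems; empty means requirements look satisfied."""
    problems = []
    if sys.version_info < min_python:
        problems.append(f"Python {min_python[0]}.{min_python[1]}+ required")
    if shutil.which("ffmpeg") is None:
        problems.append("FFmpeg not found on PATH")
    return problems
```

API keys are checked separately via the .env file described under Installation.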

Installation

From PyPI

pip install videoinstruct

From Source

  1. Clone the repository:

    git clone https://github.com/PouriaRouzrokh/VideoInstruct.git
    cd VideoInstruct
    
  2. Install the package in development mode:

    pip install -e .
    
  3. Create a .env file in the root directory with your API keys:

    OPENAI_API_KEY=your_openai_api_key
    GEMINI_API_KEY=your_gemini_api_key
    DEEPSEEK_API_KEY=your_deepseek_api_key
    
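The .env file can be loaded with a library such as python-dotenv, or with a few lines of standard-library code. A minimal sketch of the latter (the key names match those above; `load_env` is an illustrative helper, not part of VideoInstruct):

```python
import os
from pathlib import Path

REQUIRED_KEYS = ["OPENAI_API_KEY", "GEMINI_API_KEY", "DEEPSEEK_API_KEY"]

def load_env(path=".env"):
    """Parse KEY=value lines from a .env file into os.environ (skips blanks and # comments)."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

def missing_keys():
    """Return the required keys that are still unset after loading."""
    return [k for k in REQUIRED_KEYS if not os.environ.get(k)]
```

Calling `load_env()` once at startup and checking `missing_keys()` catches a misconfigured environment before any agent is invoked.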

Examples

The repository includes two example scripts to help you get started:

  1. example_usage.py: Demonstrates direct usage with the repository structure and hardcoded paths. This is useful if you're working directly with the repository without installing it as a package.

  2. package_usage.py: Shows how to use VideoInstruct after it's been installed as a package. This example demonstrates:

    • Using VideoInstruct as an imported Python package in your code
    • Using VideoInstruct from the command line

To run the examples:

# Run the basic example
python examples/example_usage.py

# Run the package usage example
python examples/package_usage.py

Using as a Python Package

You can use VideoInstruct as a Python package in your own projects:

from videoinstruct import VideoInstructor, VideoInstructorConfig
from videoinstruct.agents.DocGenerator import DocGeneratorConfig
from videoinstruct.agents.VideoInterpreter import VideoInterpreterConfig
from videoinstruct.agents.DocEvaluator import DocEvaluatorConfig
from pathlib import Path

# Create configuration
config = VideoInstructorConfig(
    doc_generator_config=DocGeneratorConfig(
        model="gpt-4o-mini",
        temperature=0.7,
        max_output_tokens=4000
    ),
    video_interpreter_config=VideoInterpreterConfig(
        model="gemini-2.0-flash",
        temperature=0.7
    ),
    doc_evaluator_config=DocEvaluatorConfig(
        model="deepseek/deepseek-reasoner",
        temperature=0.2,
        max_rejection_count=3
    ),
    user_feedback_interval=3,
    max_iterations=15,
    output_dir="output",
    temp_dir="temp"
)

# Initialize VideoInstructor
instructor = VideoInstructor(config)

# Process a video
video_path = Path("path/to/your/video.mp4")
output_path = instructor.process_video(video_path)

print(f"Documentation generated successfully: {output_path}")
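
With an instructor configured as above, a batch of videos in the data/ directory can be processed in one pass. A minimal sketch, where `find_videos` is a hypothetical helper (VideoInstruct does not ship it):

```python
from pathlib import Path

VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".avi"}

def find_videos(data_dir="data"):
    """Collect video files from a directory, sorted by name (hypothetical helper)."""
    return sorted(p for p in Path(data_dir).iterdir() if p.suffix.lower() in VIDEO_EXTS)

# for video_path in find_videos("data"):
#     output_path = instructor.process_video(video_path)  # instructor from the example above
#     print(f"{video_path.name} -> {output_path}")
```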

Using the Command Line Interface

VideoInstruct comes with a command-line interface:

# Basic usage
videoinstruct path/to/your/video.mp4

# With custom options
videoinstruct path/to/your/video.mp4 \
    --output-dir custom_output \
    --temp-dir custom_temp \
    --max-iterations 10 \
    --user-feedback-interval 2 \
    --doc-generator-model "gpt-4o" \
    --video-interpreter-model "gemini-2.0-pro" \
    --doc-evaluator-model "deepseek/deepseek-reasoner"
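
The CLI can also be driven programmatically, e.g. via subprocess. A minimal sketch, assuming keyword names map one-to-one onto the --kebab-case flags shown above (`build_command` is an illustrative helper):

```python
import subprocess

def build_command(video_path, **options):
    """Build the videoinstruct argument list; keyword names become --kebab-case flags."""
    cmd = ["videoinstruct", str(video_path)]
    for name, value in options.items():
        cmd += [f"--{name.replace('_', '-')}", str(value)]
    return cmd

# subprocess.run(build_command("data/demo.mp4", max_iterations=10,
#                              doc_generator_model="gpt-4o"), check=True)
```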

Workflow

VideoInstruct follows this workflow:

  1. Transcription: Extract text from the video
  2. Initial Description: Get a detailed visual description from VideoInterpreter
  3. Documentation Generation: DocGenerator creates initial documentation
  4. User Preview: Generated documentation is shown to the user before evaluation
  5. Documentation Evaluation: DocEvaluator assesses documentation quality
    • Provides feedback on each evaluation round
    • Maintains conversation memory for context-aware evaluation
    • Escalates to human user after a configurable number of rejections
  6. Refinement: Documentation is refined based on evaluator feedback
  7. User Feedback: User provides final approval or additional feedback
  8. Output: Final documentation is saved as markdown and optionally as PDF
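
The generate/evaluate/refine part of this workflow (steps 3 through 7) can be sketched as a control loop; this is an illustration of the described behavior with the agents abstracted as callables, not the actual implementation:

```python
def run_workflow(generate, evaluate, ask_user,
                 max_iterations=15, max_rejection_count=3, user_feedback_interval=3):
    """Sketch of the documentation loop: agents are passed in as callables."""
    doc, feedback, rejections = None, None, 0
    for iteration in range(1, max_iterations + 1):
        doc = generate(feedback)                  # steps 3 & 6: generate or refine
        approved, feedback = evaluate(doc)        # step 5: evaluator with conversation memory
        if approved:
            return doc
        rejections += 1
        if rejections >= max_rejection_count or iteration % user_feedback_interval == 0:
            feedback = ask_user(doc)              # escalation / step 7: human feedback
            rejections = 0
    return doc
```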

Development

To contribute to VideoInstruct:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature-name
  5. Submit a pull request

License

MIT License


Download files

Download the file for your platform.

Source Distribution

videoinstruct-0.1.1.tar.gz (35.9 kB)

Built Distribution


videoinstruct-0.1.1-py3-none-any.whl (40.2 kB)

File details

Details for the file videoinstruct-0.1.1.tar.gz.

File metadata

  • Download URL: videoinstruct-0.1.1.tar.gz
  • Size: 35.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for videoinstruct-0.1.1.tar.gz
  • SHA256: 8e0e92b7de759d811d9482ca22daced1f01bb8c09de3f4164e6c54b9460613c5
  • MD5: f14a36107811d1c40206da0b04331f01
  • BLAKE2b-256: 57b3c3a8691479dc03bbbbb988f559bff64c18e971913403dac8e89adef2ffd2


File details

Details for the file videoinstruct-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: videoinstruct-0.1.1-py3-none-any.whl
  • Size: 40.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for videoinstruct-0.1.1-py3-none-any.whl
  • SHA256: c05829d9187a6acdb8c1c88586b6aa6cfc8235db9328b156c321e4f7b5498048
  • MD5: b3e79fe062dad7a3cec06971364654ac
  • BLAKE2b-256: 7c6d2def4af4d5212e14b0b26602ebe50a78b106a993f1cab5c3737fcdd55e87

