video-thumbnail-creator
Note: This project has been archived by its maintainers. No new releases are expected.
CLI tool to extract and select a thumbnail (poster) image from a video or image file. Supports manual selection, fully automatic AI selection (Claude Vision), and a semi-automatic suggest mode where the AI proposes a frame and you confirm.
Every generated JPEG has creation metadata (EXIF) embedded so that the exact parameters
used during generation are preserved and can be read back with the info subcommand.
This applies to the CLI, the high-level create_thumbnail() helper, and the
mid-level ThumbnailSession.compose() API for the main composed JPEG output.
When re-generating a poster from a video that already has a VTC-generated embedded poster,
the tool automatically re-extracts the original frame from the video using the stored
frame_index metadata — ensuring a clean, high-quality source for the new composition.
Note: This tool extracts and saves a still image only. Embedding the image into the video file (e.g. with AtomicParsley) is out of scope and must be handled by the caller.
Requirements
- Python 3.10+
- ffmpeg and ffprobe available in $PATH (required for video input; not needed for image-only input)
- macOS only: sips — built-in macOS image tool, used for wide-gamut color space conversion (TIFF/HEIC with Rec.2020 or Display P3). On Linux/Windows, Pillow is used as a fallback but may not handle wide-gamut images correctly.
- An Anthropic API key for auto/suggest modes (set via the CLAUDE_API_KEY environment variable or video-thumbnail-creator config set claude.api_key <key>)
Installation
pip install video-thumbnail-creator
Or in editable/development mode:
pip install -e .
Usage
video-thumbnail-creator extract <input_path> [OPTIONS]
Options
| Option | Description |
|---|---|
| `--mode manual` | Interactive: open mosaic, enter frame number 0–19 |
| `--mode auto` | Fully automatic: AI selects the best frame |
| `--mode suggest` | AI suggests a frame; you confirm or override |
| `--format poster` | Output format 2:3 (1080×1620) with 1:1 crop + text area (default) |
| `--format landscape` | Output format 16:9 (1920×1080), the existing behaviour |
| `--embedded-image prefer` | Use embedded cover art or sidecar image if present; otherwise extract frames |
| `--embedded-image ignore` | Always extract frames (ignore any embedded cover art or sidecar images) |
| `--embedded-image ask` | Prompt user when embedded cover art or sidecar image is found (default) |
| `--crop-position POSITION` | Set crop position directly (left, center-left, center, center-right, right); skips interactive prompt and AI crop selection |
| `--overlay-title TEXT` | Title text to overlay on the output image |
| `--overlay-title-from-filename` | Use the input filename stem as overlay title |
| `--overlay-category TEXT` | Category label shown above the title (poster format only) |
| `--overlay-category-logo PATH` | PNG logo shown instead of category text (poster format only); wide logos centered above title, square/portrait logos to the left |
| `--overlay-note TEXT` | Small text centered at the bottom of the poster text area (poster format only) |
| `--style NAME` | Poster style to use (default: `internet`). Run `video-thumbnail-creator styles` to list available options |
| `--description TEXT` | Optional video description for AI context (max 1000 chars) |
| `--output-dir PATH` | Output directory (default: same directory as the video) |
| `--output-name-suffix SUFFIX` | Suffix appended to the video filename stem (default: `-poster`) |
| `--json` | Emit machine-readable JSON to stdout |
| `--no-badges` | Disable automatic technical badges (4K, HD, HDR) on the poster |
| `--fanart` | Generate an additional clean 16:9 fanart image (for Infuse/Emby) with `-fanart` suffix |
info Subcommand
Read and display the creation metadata embedded in a generated poster image:
video-thumbnail-creator info /path/to/poster.jpg
Default output:
Poster Metadata:
Version 1.3.0
Source frame
Frame Index 12
Crop Position center-left
Format poster
Mode auto
Input File 2025-11-01_Herbst-Spaziergang.mp4
Overlay Title Herbst-Spaziergang
Category Videoschnittstudio Silvan Kurmann
Note 1. November 2025
AI Reasoning Sharp, well-lit frame with child running towards camera…
Created 2026-02-26T14:30:00
JSON output (--json):
video-thumbnail-creator info --json /path/to/poster.jpg
{
"vtc_version": "1.3.0",
"source": "frame",
"frame_index": 12,
"crop_position": "center-left",
"format": "poster",
"mode": "auto",
"input_file": "2025-11-01_Herbst-Spaziergang.mp4",
"overlay_title": "Herbst-Spaziergang",
"overlay_category": "Videoschnittstudio Silvan Kurmann",
"overlay_note": "1. November 2025",
"ai_reasoning": "Sharp, well-lit frame with child running towards camera...",
"created_at": "2026-02-26T14:30:00"
}
The embedded metadata enables future re-generation of posters (e.g. with a new template) without needing AI calls or interactive prompts.
Poster Format (2:3)
The default poster format produces a 1080×1620 image composed of two sections:
┌──────────────────┐
│ │
│ 1:1 crop of │ ← 1080×1080 square crop (with subtle vignette)
│ selected frame │
│ │
├──────────────────┤ ← 10px separator line (#2a2a2a)
│▓ [category] ▓│ ← optional category text or logo above title
│▓ ▓│
│▓ Title ▓│ ← bold, auto-sized (40–72px), centered, drop shadow
│▓ ▓│
│▓ note ▓│ ← optional small note text at bottom-right
│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓│ ← solid dark background (#1a1a1a)
└──────────────────┘
2:3
- Top section (1080×1080): A 1:1 square crop of the selected high-res frame with a subtle radial vignette effect (15–20% edge darkening). The horizontal crop position (left/center-left/center/center-right/right) can be set directly with --crop-position, chosen by AI in auto/suggest mode, or prompted from the user in manual mode.
- Bottom section: The text area fills the entire bottom section edge-to-edge. Layout details depend on the active style (see Poster Styles):
  - Title (optional --overlay-title): bold, auto-sized, word-wrapped, centered, white with drop shadow.
  - Note (optional --overlay-note): small text centered at the bottom of the text area.
  - Category / Logo (optional): placement varies by style — in the text area above the title, or as an overlay band at the top of the image.
Supported Image Formats
When <input_path> points to an image file, the mosaic/frame-extraction pipeline
is skipped and the image is used directly as the high-res source for crop-position
selection and poster/landscape composition.
| Format | Notes |
|---|---|
| JPEG (`.jpg`, `.jpeg`) | Used directly (copied as-is for sRGB images) |
| PNG (`.png`) | Converted to JPEG via Pillow |
| TIFF (`.tiff`, `.tif`) | Wide-gamut (Rec.2020, P3) converted to sRGB via sips (macOS) |
| HEIC/HEIF (`.heic`, `.heif`) | Converted to sRGB JPEG via sips (macOS) |
Note: Wide-gamut color space conversion (Rec.2020, Display P3) requires macOS with sips. On other systems, Pillow is used as a fallback but may not handle wide-gamut TIFF files correctly.
Examples
Manual mode
video-thumbnail-creator extract /path/to/video.mp4 --mode manual
The mosaic of 20 frames is opened in the system image viewer. Enter the frame number (0–19) at the prompt. In poster format, you are also prompted to choose a crop position. The result is saved next to the video.
Manual mode — landscape with text overlay
video-thumbnail-creator extract /path/to/video.mp4 \
--mode manual \
--format landscape \
--overlay-title-from-filename
Automatic AI mode (poster, default format)
video-thumbnail-creator extract /path/to/video.mp4 \
--mode auto \
--description "Documentary about rocket launches" \
--overlay-title "2025-05-15 – Starship IFT-7"
The AI selects the best frame from the 20-frame mosaic, then chooses the optimal 1:1 crop position for the poster, and renders the text in the dark bottom text area.
Poster with category, logo, and note
# Category text above title, note at bottom-right
video-thumbnail-creator extract /path/to/video.mp4 \
--mode auto \
--overlay-title "Starship IFT-7" \
--overlay-category "Space Exploration" \
--overlay-note "2025-05-15"
# Category logo (wide, centered above title)
video-thumbnail-creator extract /path/to/video.mp4 \
--mode auto \
--overlay-title "Starship IFT-7" \
--overlay-category-logo /path/to/channel-logo-wide.png \
--overlay-note "Episode 7"
# Square/portrait logo (left of title)
video-thumbnail-creator extract /path/to/video.mp4 \
--mode manual \
--overlay-title "My Documentary" \
--overlay-category-logo /path/to/icon-square.png
Note on --overlay-category vs --overlay-category-logo: If both are provided, the logo takes precedence and a warning is printed to stderr.
Using embedded cover art (MP4/M4V/MOV)
# Use embedded artwork if present, fall back to frame extraction
video-thumbnail-creator extract /path/to/video.mp4 \
--embedded-image prefer \
--mode auto
# Make "prefer" the default for all future invocations
video-thumbnail-creator config set defaults.embedded_image prefer
The --embedded-image option controls how embedded cover art and sidecar images are handled:
- prefer: Use the embedded image or sidecar image if found; otherwise extract frames normally.
- ignore: Always extract frames, even if embedded cover art or sidecar images exist.
- ask (default): Prompt the user when an embedded image or sidecar image is found.
Smart re-generation from VTC-generated embedded posters
When the embedded poster was previously created by video-thumbnail-creator, the tool reads
its stored EXIF metadata to automatically re-extract the original raw frame from the video
at full resolution, then composes a fresh poster from scratch using the current template and
style settings. This avoids using the already-rendered image (with text overlays, crop, and
vignette baked in) as a composition source.
The following metadata fields are reused from the existing poster:
- frame_index — which frame (0–19) was originally selected
- crop_position — which crop position was used (can be overridden with --crop-position)
- ai_reasoning — the original AI reasoning, preserved in the new poster's metadata
If the embedded image was not generated by video-thumbnail-creator (e.g. it was set
externally), or its frame_index is unavailable, the tool falls back to using the embedded
image directly as the source.
Note: Embedded image detection is only supported for MP4, M4V, and MOV containers. For other formats the tool falls through to sidecar detection and then frame extraction. The --embedded-image option is independent from --mode; the mode only affects how the crop position is determined after the source image is resolved.
Using sidecar thumbnail images
When extracting from a video file (e.g. video.mp4), the tool also checks for
existing thumbnail images ("sidecar" files) in the same directory with the same
filename stem:
video.jpg / video.jpeg / video.png / video.tiff / video.tif
The detection priority for video input is:
- Embedded image (inside the video container)
- Sidecar image (next to the video file)
- Frame extraction (mosaic flow)
The --embedded-image option controls sidecar image handling the same way it
controls embedded image handling.
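A rough sketch of that priority under the three policies (illustrative only; the exact interactive behaviour, including the `ask` callback, is an assumption):

```python
def pick_source(has_embedded: bool, has_sidecar: bool, policy: str, ask=None) -> str:
    """Model the documented detection priority for video input:
    embedded image -> sidecar image -> frame extraction.

    `policy` is the --embedded-image value; `ask` stands in for the
    interactive prompt in 'ask' mode (hypothetical signature).
    """
    if policy != "ignore":
        for kind, present in (("embedded", has_embedded), ("sidecar", has_sidecar)):
            if present and (policy == "prefer" or (policy == "ask" and ask(kind))):
                return kind
    return "frame"

print(pick_source(True, True, "prefer"))   # embedded
print(pick_source(False, True, "prefer"))  # sidecar
print(pick_source(True, True, "ignore"))   # frame
```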
# Use sidecar image if present, fall back to frame extraction
video-thumbnail-creator extract /path/to/video.mp4 \
--embedded-image prefer \
--mode auto
# Always use frames, ignore any sidecar images
video-thumbnail-creator extract /path/to/video.mp4 \
--embedded-image ignore \
--mode manual
When a sidecar image is used, the JSON output has "source": "sidecar":
{
"poster_path": "/path/to/video-poster.jpg",
"frame_index": -1,
"mode": "auto",
"format": "poster",
"source": "sidecar",
"reasoning": "Sidecar image file was used as source. | Crop: ...",
"crop_position": "center",
"input_path": "/path/to/video.mp4"
}
Using --crop-position to skip interactive and AI crop selection
# Set crop position directly — skips AI crop selection (saves API costs)
video-thumbnail-creator extract /path/to/video.mp4 \
--mode auto \
--crop-position center
# Useful for batch processing where crop position is already known
for f in /videos/*.mp4; do
video-thumbnail-creator extract "$f" --mode auto --crop-position center-left
done
The --crop-position option accepts: left, center-left, center, center-right, right.
When provided with --format poster, it skips both the interactive crop prompt
(manual mode) and the AI crop selection call (auto/suggest modes).
Image file input (JPEG, PNG, TIFF, HEIC)
# Create poster from a TIFF image (auto color space conversion on macOS)
video-thumbnail-creator extract /path/to/photo.tiff \
--mode auto \
--overlay-title "Herbst-Spaziergang"
# Create poster from JPEG with manual crop
video-thumbnail-creator extract /path/to/photo.jpg \
--mode manual \
--overlay-title "Mein Foto" \
--overlay-category "Familie Kurmann"
For image input, ffmpeg/ffprobe are not required. The image itself is used as the high-res source; only crop-position selection and poster composition run.
Semi-automatic suggest mode
video-thumbnail-creator extract /path/to/video.mp4 \
--mode suggest \
--output-dir /tmp/thumbs \
--output-name-suffix -thumb
In poster format, after the frame is selected the AI also suggests a crop position, which you can confirm or override at the prompt.
Configuration
Settings can be stored in ~/.config/video-thumbnail-creator/config.toml so you
don't have to pass them on every invocation. The directory and file are created
automatically on the first config set.
Priority order (highest to lowest)
- Explicit CLI arguments
- Config file values
- Built-in defaults
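The resolution order can be expressed as a small lookup; `resolve_setting` is an illustrative helper, not part of the package API:

```python
def resolve_setting(key: str, cli_args: dict, config: dict, defaults: dict):
    """Resolve one setting using the documented priority:
    explicit CLI argument > config file value > built-in default."""
    for layer in (cli_args, config, defaults):
        if key in layer and layer[key] is not None:
            return layer[key]
    return None

defaults = {"mode": "manual", "format": "poster"}
config = {"mode": "auto"}        # from config.toml
cli = {"format": "landscape"}    # from the command line
print(resolve_setting("mode", cli, config, defaults))    # auto
print(resolve_setting("format", cli, config, defaults))  # landscape
```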
Commands
# Store a value
video-thumbnail-creator config set claude.api_key "sk-ant-..."
# Read a single value
video-thumbnail-creator config get claude.model
# Show all stored values
video-thumbnail-creator config list
Allowed keys
| Key | Description | Default |
|---|---|---|
| `claude.api_key` | Anthropic Claude API key | (none) |
| `claude.model` | Claude model name | `claude-sonnet-4-5` |
| `tools.ffmpeg` | Path to ffmpeg binary | `ffmpeg` |
| `tools.ffprobe` | Path to ffprobe binary | `ffprobe` |
| `defaults.output_name_suffix` | Suffix for output filename | `-poster` |
| `defaults.mode` | Default selection mode | `manual` |
| `defaults.format` | Default output format (`poster` or `landscape`) | `poster` |
| `defaults.embedded_image` | Default embedded image handling (`prefer`, `ignore`, `ask`) | `ask` |
Example config.toml
[claude]
api_key = "sk-ant-..."
model = "claude-sonnet-4-5"
[tools]
ffmpeg = "ffmpeg"
ffprobe = "ffprobe"
[defaults]
output_name_suffix = "-poster"
mode = "manual"
format = "poster"
embedded_image = "ask"
Output filename
The output filename is formed by appending the suffix to the video file stem:
2025-05-15_Starship_IFT7.mkv + suffix "-poster" → 2025-05-15_Starship_IFT7-poster.jpg
Output
Default (no --json)
stdout contains only the absolute path of the created image:
/path/to/poster.jpg
All status messages, progress info, and AI reasoning are written to stderr.
JSON mode (--json)
{
"poster_path": "/path/to/poster.jpg",
"frame_index": 12,
"mode": "auto",
"format": "poster",
"source": "frame",
"reasoning": "Sharp, well-lit frame that is representative of the content.",
"crop_position": "center-left",
"overlay_title": "My Video Title",
"overlay_category": "Space Exploration",
"overlay_note": "2025-05-15",
"input_path": "/path/to/video.mp4"
}
The overlay_category field is only present when --overlay-category is provided (and --overlay-category-logo
is not used alongside it). The overlay_note field is only present when --overlay-note is provided.
When embedded cover art is used as the source:
{
"poster_path": "/path/to/video-poster.jpg",
"frame_index": -1,
"mode": "auto",
"format": "poster",
"source": "embedded",
"reasoning": "Embedded cover art was used as source image. | Crop: ...",
"crop_position": "center",
"input_path": "/path/to/video.mp4"
}
When the embedded image is a VTC-generated poster with valid metadata, the tool re-extracts
the original frame from the video. In this case, the output looks like a normal frame-based result
with "source": "frame" and the actual frame_index:
{
"poster_path": "/path/to/video-poster.jpg",
"frame_index": 12,
"mode": "auto",
"format": "poster",
"source": "frame",
"reasoning": "Sharp, well-lit frame with child running towards camera…",
"crop_position": "center-left",
"input_path": "/path/to/video.mp4"
}
When an image file is used as the source:
{
"poster_path": "/path/to/photo-poster.jpg",
"frame_index": -1,
"mode": "auto",
"format": "poster",
"source": "image",
"reasoning": "Image file was used as source. | Crop: ...",
"crop_position": "center",
"input_path": "/path/to/photo.tiff"
}
The source field is "frame" when a video frame was used, "embedded" when
embedded cover art was used directly (non-VTC image or no usable frame metadata),
"sidecar" when a sidecar image file was used, and "image" when an image file
was used as input.
When source is "embedded", "sidecar", or "image", frame_index is -1.
When a VTC-generated embedded poster is re-generated via its stored frame_index,
source is "frame" and frame_index holds the original frame number.
When --fanart is used, the JSON output includes an additional fanart_path field:
{
"poster_path": "/path/to/video-poster.jpg",
"fanart_path": "/path/to/video-fanart.jpg",
...
}
Fanart Image (--fanart)
The --fanart flag generates an additional clean 16:9 JPEG alongside the
normal poster or landscape output. This image has no text overlays, no badges,
and no gradients — just the pure source frame scaled to 16:9. It is intended for
media servers such as Infuse and Emby that look for a file with a
-fanart suffix.
# Generates both "My Video-poster.jpg" and "My Video-fanart.jpg"
video-thumbnail-creator extract "My Video.mp4" --fanart
Output resolution:
- 4K source (width ≥ 3840 or height ≥ 2160): 3840 × 2160
- Otherwise: 1920 × 1080
Non-16:9 sources: A blurred background fill is applied automatically (same visual approach as the existing frame extraction) so the output is always exactly 16:9 without black bars or stretching.
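The resolution rule maps directly to a two-branch function (illustrative sketch of the stated behaviour):

```python
def fanart_resolution(src_width: int, src_height: int) -> tuple[int, int]:
    """Pick the fanart output size per the documented 4K threshold."""
    if src_width >= 3840 or src_height >= 2160:
        return (3840, 2160)  # 4K source
    return (1920, 1080)

print(fanart_resolution(3840, 1608))  # (3840, 2160), 4K-wide scope footage
print(fanart_resolution(1920, 1080))  # (1920, 1080)
```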
Poster Styles
All visual design constants (colors, fonts, layout, badge placement) are defined
as built-in named styles. Select a style with --style <name> (default: internet).
List available styles
video-thumbnail-creator styles
| Style | Description |
|---|---|
| `internet` | Category header overlaid at top of image, matte black text area, large bold fonts, badges on image — optimised for YouTube/web video posters |
Examples
# Default style (`internet`)
video-thumbnail-creator extract video.mp4 \
  --overlay-title "My Video Title" \
  --overlay-category "My Channel" \
  --overlay-note "15. March 2025"
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error (file not found, ffmpeg missing, etc.) |
| 2 | No selection made (user cancelled) |
| 3 | AI selection failed (no API key, timeout, invalid response) |
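When driving the tool from a script, these codes can be mapped to messages instead of treating every non-zero result the same; the wrapper below is a sketch, not part of the package:

```python
import subprocess

# Exit-code meanings as documented above.
EXIT_MESSAGES = {
    0: "success",
    1: "general error (file not found, ffmpeg missing, etc.)",
    2: "no selection made (user cancelled)",
    3: "AI selection failed (no API key, timeout, invalid response)",
}

def run_extract(args: list[str]) -> tuple[int, str]:
    """Run the CLI and translate its exit code to a readable message."""
    proc = subprocess.run(
        ["video-thumbnail-creator", "extract", *args],
        capture_output=True, text=True,
    )
    return proc.returncode, EXIT_MESSAGES.get(proc.returncode, "unknown exit code")
```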
Integration
Library Integration (Python Import)
video-thumbnail-creator ships a full Python API at three levels — pick the
one that fits your use case:
CLI (Terminal user)
└→ High-Level API: create_thumbnail() – one call, everything automatic
└→ Mid-Level API: ThumbnailSession – multi-step, caller controls each step
└→ Low-Level API: extract_frames(), compose_poster(), etc. – individual building blocks
High-Level API – create_thumbnail()
One call for fully automatic thumbnail creation (AI selects frame and crop position):
from video_thumbnail_creator import create_thumbnail
result = create_thumbnail(
"/path/to/video.mp4",
overlay_title="My Film",
output_dir="/output/",
fanart=True,
)
print(result.poster_path) # absolute path to the poster JPEG
print(result.fanart_path) # absolute path to the fanart JPEG (or None)
print(result.reasoning) # AI explanation
The returned poster JPEG already contains readable VTC EXIF metadata (for
example frame_index, crop_position, overlays, and AI reasoning when
available), so later rebuilds can call read_metadata() directly on the file.
Works with image files too (JPEG, PNG, TIFF, HEIC) — ffmpeg is not required:
result = create_thumbnail("/path/to/cover.jpg", format="poster")
Mid-Level API – ThumbnailSession
ThumbnailSession gives you full control over each step. Use the context
manager for automatic cleanup of temporary files.
ThumbnailSession.compose() automatically embeds VTC metadata into the main
JPEG it writes. The session keeps the selected frame and any AI suggestions it
generated, so compose() can reuse that context without requiring a separate
manual embed_metadata() call. Fields that are not available for the current
input are omitted; fanart keeps its previous behavior.
How the mid-level session logic works
- ThumbnailSession(input_path) initializes the working state once:
  - for video input, it reads video_properties, extracts the preview frames, and builds mosaic_path
  - for image input, it prepares highres_path immediately and marks the session as ready for compose()
- suggest_frame() asks the AI for a frame suggestion and stores that suggestion plus its reasoning inside the session.
- select_frame(frame_index) makes the frame choice effective by extracting the high-resolution source frame to highres_path. If you select a different frame than the last AI suggestion, the stored frame reasoning is cleared so later metadata reflects the effective choice rather than stale AI context.
- suggest_crop() asks the AI for a poster crop suggestion for the currently selected highres_path and stores both the suggested crop position and its reasoning in the session.
- compose() renders the final main JPEG from the current session state and automatically embeds VTC metadata by calling the official build_metadata()/embed_metadata() helpers internally.
That means another Python application can treat ThumbnailSession as a
stateful workflow object: first gather or choose frame/crop information, then
call compose() once and receive a JPEG whose metadata already describes the
effective session state.
Metadata behavior of compose()
- The main composed JPEG always gets VTC EXIF metadata.
- Metadata is derived from the current session state plus the current compose arguments.
- The stored fields include the effective frame_index, crop_position, format, mode, input_file, overlays, optional ai_reasoning, and an optional poster_template label when the template dict provides a field such as name.
- If the final frame/crop matches the stored AI suggestions, the metadata keeps the corresponding AI reasoning and marks the mode as auto.
- If the caller overrides those suggestions manually, the metadata reflects the manual outcome instead of preserving outdated AI hints.
- fanart=True still creates the additional fanart image, but fanart keeps its previous behavior and is not treated as a second metadata-bearing poster output.
Relationship to the high-level API
create_thumbnail() is a thin convenience wrapper around the same
ThumbnailSession workflow. It benefits from the same stored selection context
and automatic metadata embedding; there is no separate metadata implementation
for the high-level path.
Automatic — AI decides everything, step by step:
from video_thumbnail_creator import ThumbnailSession
with ThumbnailSession("/path/to/video.mp4") as session:
suggestion = session.suggest_frame(title="My Film", description="A documentary")
session.select_frame(suggestion["frame_index"])
crop = session.suggest_crop()
result = session.compose(
crop_position=crop["crop_position"],
overlay_title="My Film",
output_dir="/output/",
)
Suggest — AI suggests, caller confirms or overrides:
with ThumbnailSession("/path/to/video.mp4") as session:
print(session.mosaic_path) # show the mosaic to the user
suggestion = session.suggest_frame()
# ... show suggestion to user, let them confirm or pick a different index ...
chosen_index = int(input(f"Frame [{suggestion['frame_index']}]: ") or suggestion["frame_index"])
session.select_frame(chosen_index)
result = session.compose(crop_position="center", output_dir="/output/")
Manual — caller decides everything, mosaic is just a visual aid:
with ThumbnailSession("/path/to/video.mp4") as session:
# Display session.mosaic_path to the user, then:
session.select_frame(7)
result = session.compose(
crop_position="center-left",
overlay_title="My Film",
format="poster",
output_path="/output/my-film-poster.jpg",
fanart=True,
)
Image input — no frame extraction needed:
with ThumbnailSession("/path/to/cover.jpg") as session:
# session._frame_selected is already True; call compose() directly
result = session.compose(format="poster", output_dir="/output/")
Low-Level API – Individual Functions
Use the building blocks directly when you need maximum control:
from video_thumbnail_creator import (
get_video_properties,
extract_frames,
create_mosaic,
extract_single_frame_highres,
compose_poster,
compose_fanart,
detect_badges,
)
props = get_video_properties("/path/to/video.mp4")
frame_paths = extract_frames("/path/to/video.mp4", "/tmp/frames/")
mosaic = create_mosaic(frame_paths, "/tmp/mosaic.jpg")
highres = extract_single_frame_highres("/path/to/video.mp4", 5, "/tmp/highres.jpg")
badges = detect_badges(props)
compose_poster(highres, "center", "My Title", "/output/poster.jpg", badges=badges)
compose_fanart(highres, "/output/fanart.jpg", is_4k=props["is_4k"])
CLI Integration (Subprocess)
Because stdout contains only the file path (or clean JSON), this tool is also
easy to integrate via subprocess when you cannot import it directly:
import subprocess, json, os
result = subprocess.run(
["video-thumbnail-creator", "extract", video_path, "--mode", "auto", "--json"],
capture_output=True, text=True, check=True,
env={**os.environ, "CLAUDE_API_KEY": "sk-ant-..."},
)
data = json.loads(result.stdout)
poster_path = data["poster_path"]
Recent Changes
This section lists the release notes for the three most recent versions. For older versions, see the Releases page or the respective version on PyPI.
v1.7.2
- Hyphen-aware text wrapping: Compound words joined by hyphens (e.g. "Schwangerschaftsyoga-Kurs") now break after the hyphen when the full word does not fit on one line — resulting in "Schwangerschaftsyoga- / Kurs" instead of overflowing or being cut off.
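The described behaviour boils down to a hyphen-aware fallback for oversized words; the function below is a sketch of that rule (`fits` is a hypothetical width predicate, not the renderer's actual measurement):

```python
def break_hyphenated(word: str, fits) -> list[str]:
    """If a word doesn't fit on one line, break after the last hyphen,
    keeping the hyphen at the end of the first part."""
    if fits(word) or "-" not in word:
        return [word]
    head, _, tail = word.rpartition("-")
    return [head + "-", tail]

fits = lambda w: len(w) <= 21  # stand-in for a pixel-width check
print(break_hyphenated("Schwangerschaftsyoga-Kurs", fits))
# ['Schwangerschaftsyoga-', 'Kurs']
```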
v1.7.1
- Mid-level API metadata parity: ThumbnailSession.compose() now embeds the same readable VTC EXIF metadata as the CLI path, so poster JPEGs created through the Python API preserve frame_index, crop_position, overlays, format, mode, and optional AI reasoning for later rebuilds.
- High-level API inherits the fix: create_thumbnail() now benefits automatically from the improved ThumbnailSession flow instead of maintaining a separate metadata implementation.
- README/API clarity: The library documentation now explains the mid-level session workflow, retained selection context, and when poster metadata is embedded automatically.
v1.7.0
- Smart poster re-generation: When a video already has a VTC-generated embedded poster, the tool now reads the stored frame_index and crop_position from its EXIF metadata and re-extracts the original raw frame from the video at full resolution. This ensures a clean composition source (no baked-in overlays or vignette) when regenerating with a new template or style — with no extra AI calls needed.
- The original crop_position from the metadata is used automatically; it can still be overridden with --crop-position or confirmed interactively in suggest mode.
- If the embedded image was not generated by vtc (no frame_index metadata), the tool falls back to the previous behavior of using the embedded image directly.
v1.6.1
- Internet poster layout refined: Long titles are now better contained so they no longer crowd the date or note area at the bottom
- Protected note area: The overlay note/date now has its own reserved space in the lower section, improving readability and visual balance
- Stronger category header: The category label at the top is larger and better proportioned relative to the title, while still scaling down automatically for longer labels
- Cleaner visual spacing: The top header band and lower text area were rebalanced for a calmer overall composition in the internet style
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file video_thumbnail_creator-1.7.3.tar.gz.
File metadata
- Download URL: video_thumbnail_creator-1.7.3.tar.gz
- Upload date:
- Size: 93.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `eea64cc1355424525fa5dacd519cf3cdac2a253a27ba7bc9d330df0fcc593265` |
| MD5 | `1c0c9ac6b8a3e71878814eafff8a8816` |
| BLAKE2b-256 | `09dc52819253dd7597c90c214d68fb37ea937685e8029ffac277940bd98b47ba` |
Provenance
The following attestation bundles were made for video_thumbnail_creator-1.7.3.tar.gz:
Publisher: publish.yml on kurmann/video-thumbnail-creator
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: video_thumbnail_creator-1.7.3.tar.gz
- Subject digest: eea64cc1355424525fa5dacd519cf3cdac2a253a27ba7bc9d330df0fcc593265
- Sigstore transparency entry: 1066035852
- Sigstore integration time:
- Permalink: kurmann/video-thumbnail-creator@8ca133423d27e444a053a4a3e87434eee0ae37df
- Branch / Tag: refs/tags/v1.7.3
- Owner: https://github.com/kurmann
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@8ca133423d27e444a053a4a3e87434eee0ae37df
- Trigger Event: release
File details
Details for the file video_thumbnail_creator-1.7.3-py3-none-any.whl.
File metadata
- Download URL: video_thumbnail_creator-1.7.3-py3-none-any.whl
- Upload date:
- Size: 65.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7dcc36255a0640d499fdd0be6d0f5f78902f5eba8e46526e41dc49c63eb18da4` |
| MD5 | `02f62f680d4a410fba6bda8a59cc46e2` |
| BLAKE2b-256 | `c4fec70ce9df60e621de5d507f41068cc497957ee0dae4bab9b9892bfdd4b779` |
Provenance
The following attestation bundles were made for video_thumbnail_creator-1.7.3-py3-none-any.whl:
Publisher: publish.yml on kurmann/video-thumbnail-creator
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: video_thumbnail_creator-1.7.3-py3-none-any.whl
- Subject digest: 7dcc36255a0640d499fdd0be6d0f5f78902f5eba8e46526e41dc49c63eb18da4
- Sigstore transparency entry: 1066035857
- Sigstore integration time:
- Permalink: kurmann/video-thumbnail-creator@8ca133423d27e444a053a4a3e87434eee0ae37df
- Branch / Tag: refs/tags/v1.7.3
- Owner: https://github.com/kurmann
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@8ca133423d27e444a053a4a3e87434eee0ae37df
- Trigger Event: release