vidmagikAgent
An AI-powered video editing agent that automatically creates short-form content from any video.
Paste a URL → AI downloads, analyzes, detects highlights, edits, and exports vertical shorts for TikTok, YouTube Shorts & Instagram Reels.
Quick Start • How It Works • Web App • MCP Backend • Custom Effects • Docker • Testing
Disclaimer – Output quality depends on the model you use and on your extra instructions. A model may handle tool calling very well yet still fail to produce high-quality edits. In my experience it's best to use a VLM (vision-language model) with a large context window and good reasoning capabilities, or a model fine-tuned for video editing tasks; it doesn't have to generate video, just understand the context and make good decisions.
Overview
vidmagikAgent is a full-stack AI video editing agent that turns any video into polished, platform-ready short-form content, completely autonomously.
The project consists of three major components:
| Component | What It Does |
|---|---|
| **AI Shorts Creator** (`src/app/`) | A NiceGUI web application – the main user-facing product. Paste a video URL, give optional creative instructions, and the AI agent handles everything: downloading, highlight detection, scene analysis, intelligent clip selection, vertical reframing, effects, and export. |
| **vidmagik-mcp Backend** (`src/api/main.py`) | An MCP (Model Context Protocol) server exposing 60+ video/audio editing tools built on MoviePy. This is the "hands" the AI agent uses to actually edit video. Supports stdio, SSE, and StreamableHTTP transports. |
| **Custom Effects Library** (`src/api/custom_fx/`) | 9 production-ready visual effects – Matrix rain, kaleidoscope, chroma key, face-tracking auto-framing, 3D rotating cube, and more – plus an optical-flow highlight detection module that goes beyond MoviePy's built-in effects. |
Key Capabilities
- **Fully Autonomous** – The AI agent makes all creative decisions: which moments to pick, how to crop, what effects to apply, when to cut.
- **Any Video Source** – Downloads from YouTube, TikTok, Instagram, Twitter, and 1000+ sites via `yt-dlp`.
- **Vertical-First** – Auto-framing with face detection (Haar Cascades) converts 16:9 → 9:16 while keeping subjects centered.
- **Optical-Flow Highlight Detection** – Automatically finds high-motion/action moments using dense optical flow (Farneback algorithm) for intelligent clip selection.
- **Any LLM** – Works with LM Studio, Gemini, OpenAI, Anthropic, Ollama, or any OpenAI-compatible API via LiteLLM. Auto-detects the provider from environment variables.
- **60+ Editing Tools** – Full MoviePy API + 9 custom effects + highlight detection, all exposed as MCP tools the agent can call.
- **Real-Time Streaming UI** – Watch the agent think, call tools, and produce results in a live chat-style activity log.
- **Flexible Transport** – The MCP server supports stdio (local subprocess), SSE, and StreamableHTTP for scalable remote deployments.
- **Docker Ready** – Multi-service `docker-compose.yml` for one-command deployment.
Quick Start
Prerequisites
| Requirement | Why |
|---|---|
| Python ≥ 3.13 | Runtime |
| FFmpeg | Video/audio encoding & decoding (used by MoviePy) |
| ImageMagick | Text rendering (for TextClip) |
Install
vidmagikAgent is published on PyPI:
```bash
# Install from PyPI
pip install vidmagik-agent

# Or with uv
uv pip install vidmagik-agent
```
This installs all dependencies and registers the `vidmagik` CLI command.
CLI Usage
After installing, launch the web app with a single command:
```bash
vidmagik
```
This starts the AI Shorts Creator web UI at http://127.0.0.1:3000.
Note: Python ≥ 3.13 is required. System dependencies (FFmpeg, ImageMagick) must be installed separately.
Environment Variables (.env)
Copy .env.example to .env and configure your LLM provider. Docker Compose reads this automatically.
```bash
cp .env.example .env
```
The system auto-detects your LLM provider from environment variables:
| Variable | Example | Notes |
|---|---|---|
| `LM_STUDIO_API_BASE` | `http://localhost:1234/v1` | LM Studio local LLM (preferred) |
| `LLM_MODEL` | `lm_studio/producer/model-name` | Model name |
| `GEMINI_API_KEY` | `your-gemini-key` | Auto-selects `gemini/gemini-2.0-flash` |
| `OPENAI_API_KEY` | `sk-...` | Auto-selects `gpt-4o` |
| `ANTHROPIC_API_KEY` | `sk-ant-...` | Auto-selects `anthropic/claude-sonnet-4-20250514` |
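A minimal `.env` for the LM Studio setup described above might look like this (a sketch – all values are illustrative placeholders taken from the table):

```
# Local LM Studio endpoint (preferred when set)
LM_STUDIO_API_BASE=http://localhost:1234/v1
LLM_MODEL=lm_studio/producer/model-name

# Or configure a cloud provider instead:
# GEMINI_API_KEY=your-gemini-key
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
```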
Using the Web App
- **Paste a video URL** – YouTube, TikTok, Instagram, or any supported site.
- **Add instructions (optional)** – e.g., "Focus on the funniest moments, make 3 shorts".
- **Click "Create Shorts"** – The agent downloads the video, detects highlights via optical flow, picks the best clips, reframes for vertical, applies effects, and exports.
- **Download your shorts** – Exported files appear in the "Exported Shorts" panel with one-click download buttons.
Note: LLM settings can be configured via environment variables (recommended) or overridden in the UI's "LLM Settings" panel. Environment auto-detection means zero config for most setups.
How It Works
```mermaid
sequenceDiagram
    participant User
    participant WebUI as AI Shorts Creator<br/>(NiceGUI)
    participant YTD as yt-dlp
    participant LLM as LLM Provider<br/>(LM Studio / Cloud)
    participant MCP as vidmagik-mcp<br/>Backend

    User->>WebUI: Paste URL + optional instructions
    WebUI->>YTD: Download source video
    YTD-->>WebUI: Local file path
    WebUI->>LLM: System prompt + video path
    loop Agentic Loop (up to 50 iterations)
        LLM-->>WebUI: Tool call request
        WebUI->>MCP: Execute tool via MCP
        MCP-->>WebUI: Tool result
        WebUI->>LLM: Feed result back
    end
    LLM-->>WebUI: Final summary of creative decisions
    WebUI-->>User: Download links for finished shorts
```
The Agentic Loop
The heart of the project is the agentic loop in `src/app/mcp_client.py`. Here's what happens when you click "Create Shorts":

1. **Download** – `yt-dlp` downloads the video to the `media/` directory.
2. **System Prompt** – The app sends a system prompt to the configured LLM, instructing it to act as an expert video editor.
3. **Tool Calling Loop** – The LLM decides what to do and calls MCP tools:
   - `video_file_clip` – Load the downloaded video
   - `tools_detect_highlights` – Optical-flow analysis to find high-motion moments
   - `subclip` – Extract the best 5–15 second segments around each highlight
   - `vfx_auto_framing` – Crop to 9:16 vertical with face tracking
   - `concatenate_video_clips` – Combine all clips into one highlight reel
   - `vfx_fade_in` / `vfx_fade_out` – Smooth transitions
   - `write_videofile` – Export to `media/short.mp4`
4. **Summary** – The LLM explains its creative choices (why it picked those moments, what effects it applied).
5. **UI Updates** – Every tool call and result streams to the Agent Activity log in real time.
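In outline, the loop boils down to something like the following sketch (a simplified illustration, not the actual `mcp_client.py` code; the `llm` and `call_tool` callables are hypothetical stand-ins for the LiteLLM call and the MCP session):

```python
import json

def run_agent(messages, tools, llm, call_tool, max_iters=50):
    """Minimal agentic loop: LLM -> tool call -> execute -> feed result back.

    `llm(messages, tools)` returns an OpenAI-shaped assistant message dict;
    `call_tool(name, args)` executes one MCP tool and returns its result.
    """
    for _ in range(max_iters):
        msg = llm(messages, tools)
        messages.append(msg)
        if not msg.get("tool_calls"):
            return msg.get("content")  # no more tool calls: final summary
        for tc in msg["tool_calls"]:
            result = call_tool(tc["function"]["name"],
                               json.loads(tc["function"]["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": tc["id"],
                             "content": str(result)})
    return None  # gave up after max_iters rounds
```

The real client additionally yields typed events (thinking, tool call, result, error) so the UI can stream progress.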
Architecture Overview
```
vidmagikAgent/
├── src/                             # SOURCE PACKAGE
│   ├── __init__.py                  # Package marker
│   ├── main.py                      # CLI entry point (launches web app)
│   ├── inspect_moviepy.py           # MoviePy installation checker
│   │
│   ├── api/                         # BACKEND
│   │   ├── __init__.py              # Package init
│   │   ├── main.py                  # vidmagik-mcp server (60+ MCP tools, prompts, upload route)
│   │   └── custom_fx/               # EFFECTS LIBRARY
│   │       ├── __init__.py          # Re-exports all effects + highlight detection
│   │       ├── auto_framing.py      # Face-tracking vertical crop
│   │       ├── chroma_key.py        # Green screen removal
│   │       ├── clone_grid.py        # Grid of video clones
│   │       ├── highlight_detect.py  # Optical-flow highlight detection
│   │       ├── kaleidoscope.py      # Radial symmetry
│   │       ├── kaleidoscope_cube.py # Kaleidoscope + rotating cube combo
│   │       ├── matrix.py            # "Matrix" digital rain overlay
│   │       ├── quad_mirror.py       # Four-quadrant mirror
│   │       ├── rgb_sync.py          # RGB channel split / glitch
│   │       └── rotating_cube.py     # 3D rotating cube with video mapping
│   │
│   └── app/                         # FRONTEND – AI Shorts Creator
│       ├── __init__.py              # Package init
│       ├── main.py                  # NiceGUI web UI (dark mode, real-time log)
│       ├── mcp_client.py            # MCP client + LLM agentic loop (LiteLLM)
│       └── Dockerfile               # Frontend Docker image
│
├── tests/                           # TEST SUITE
│   ├── test_e2e.py                  # Backend end-to-end tests
│   ├── test_nicegui_integration.py  # NiceGUI integration tests
│   └── frontend_e2e_test.py         # Frontend end-to-end tests
│
├── media/                           # Working directory (gitignored)
├── Dockerfile                       # Backend Docker image
├── docker-compose.yml               # Multi-service orchestration
├── pyproject.toml                   # Dependencies, packaging & CLI config
├── uv.lock                          # Locked dependency versions
├── .env.example                     # Environment variable template
├── CUSTOM_FX.md                     # Custom effects documentation
└── LICENSE                          # MIT License
```
Web App
UI Sections
The AI Shorts Creator (src/app/main.py) is a dark-themed NiceGUI single-page application with four main sections:
1. LLM Settings (Collapsible)
Configure the LLM backend. Can be preset via environment variables or adjusted in the UI:
| Field | Env Var | Notes |
|---|---|---|
| API Base URL | `LM_STUDIO_API_BASE` | Any OpenAI-compatible endpoint |
| API Key | `LM_STUDIO_API_KEY` | Auto-detected from provider env vars |
| Model | `LLM_MODEL` | `ibm/granite-4-h-tiny`, `gpt-4o`, `gemini-2.0-flash`, etc. |
2. Video Source
- **Video URL** – Paste any URL supported by `yt-dlp`.
- **Instructions (optional)** – Natural-language creative direction for the agent.
- **"Create Shorts" button** – Kicks off the entire pipeline.
3. Agent Activity Log
A scrolling, chat-style log that streams the agent's work in real time:
- **Thinking** – The LLM's reasoning (gray)
- **Tool Calls** – Which MCP tool is being called and with what arguments (violet)
- **Tool Results** – The result of each tool call (cyan)
- **Messages** – The LLM's final summary (green)
- **Errors** – Any failures (red)
4. Exported Shorts
Download cards for each exported video file, with a movie icon and one-click download buttons.
MCP Client (src/app/mcp_client.py)
The `MCPVideoClient` class handles the full lifecycle:

- **Connection** – Connects to the vidmagik-mcp backend via StreamableHTTP (when `MCP_SERVER_URL` is set, e.g. in Docker) or spawns it as a local subprocess over stdio transport.
- **Schema Discovery** – Fetches all MCP tool schemas and converts them to OpenAI function-calling format.
- **Video Download** – Uses `yt-dlp` to download videos in the best-quality MP4 format.
- **Agentic Loop** – Iterates up to 50 rounds of LLM → tool call → result → LLM, yielding typed events for the UI to render.
- **LiteLLM Integration** – Translates MCP tool schemas to OpenAI format so any OpenAI-compatible model can drive the agent.
- **Auto-Config** – Resolves the LLM provider automatically from environment variables (LM Studio → Gemini → OpenAI → Anthropic).
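The schema-discovery step is essentially a mechanical mapping, roughly like this sketch (`mcp_tool_to_openai` is a hypothetical helper, not the project's actual code; MCP tools carry a JSON Schema in `inputSchema`, which OpenAI's tools format accepts directly as `parameters`):

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Convert one MCP tool schema into OpenAI function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP already describes inputs with JSON Schema, so it maps 1:1
            "parameters": tool.get("inputSchema",
                                   {"type": "object", "properties": {}}),
        },
    }
```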
MCP Backend
The backend (`src/api/main.py`) is a FastMCP server that wraps the entire MoviePy video editing library as MCP tools. It can run in three transport modes:
```bash
# HTTP – for standalone use or StreamableHTTP MCP clients
uv run src/api/main.py --transport http --host 0.0.0.0 --port 8080

# SSE – for Server-Sent Events MCP clients
uv run src/api/main.py --transport sse

# stdio (default) – for subprocess-based MCP clients (used by the AI Shorts Creator locally)
uv run src/api/main.py --transport stdio
```
Clip Management System
All clips live in an in-memory store (a `CLIPS` dict) with a maximum of 100 concurrent clips. Every tool that creates or transforms a clip returns a UUID handle:
```
video_file_clip("video.mp4")               → "a1b2-..."
subclip("a1b2-...", 10.0, 25.0)            → "c3d4-..."
vfx_fade_in("c3d4-...", 1.0)               → "e5f6-..."
write_videofile("e5f6-...", "output.mp4")  → "Successfully wrote video to output.mp4"
```
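The handle mechanism can be approximated in a few lines (a sketch of the idea, not the actual backend code):

```python
import uuid

CLIPS: dict[str, object] = {}  # clip_id -> MoviePy clip object
MAX_CLIPS = 100

def register_clip(clip) -> str:
    """Store a clip and hand back a UUID the agent passes to later tools."""
    if len(CLIPS) >= MAX_CLIPS:
        raise RuntimeError(f"Clip store full ({MAX_CLIPS}); delete clips first")
    clip_id = str(uuid.uuid4())
    CLIPS[clip_id] = clip
    return clip_id

def get_clip(clip_id: str):
    """Look a clip up by its UUID handle."""
    if clip_id not in CLIPS:
        raise KeyError(f"Unknown clip_id: {clip_id}")
    return CLIPS[clip_id]
```

Returning opaque handles keeps large clip objects out of the LLM conversation; the model only ever shuffles short UUID strings between tools.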
Full Tool Inventory (60+ tools)
Clip Management (5 tools)
| Tool | Description |
|---|---|
| `validate_path(filename)` | Path validation to prevent directory traversal |
| `register_clip(clip)` | Register a clip and return its UUID |
| `get_clip(clip_id)` | Retrieve a clip by UUID |
| `list_clips()` | List all loaded clips and their types |
| `delete_clip(clip_id)` | Remove a clip from memory |
Video I/O (10 tools)
| Tool | Description |
|---|---|
| `video_file_clip(filename, audio, fps_source, target_resolution)` | Load a video file |
| `image_clip(filename, duration, transparent)` | Load an image as a clip |
| `image_sequence_clip(sequence, fps, durations, with_mask)` | Create a clip from an image sequence or folder |
| `text_clip(text, font, font_size, color, bg_color, size, method, duration)` | Create a text clip (needs ImageMagick) |
| `color_clip(size, color, duration)` | Create a solid color clip |
| `credits_clip(creditfile, width, color, stroke_color, ...)` | Scrolling credits from a text file |
| `subtitles_clip(filename, encoding, font, font_size, color)` | Subtitles from an `.srt` file |
| `write_videofile(clip_id, filename, fps, codec, audio_codec, bitrate, preset, ...)` | Export video to file |
| `write_gif(clip_id, filename, fps, loop)` | Export clip as GIF |
| `tools_ffmpeg_extract_subclip(filename, start_time, end_time, targetname)` | Fast FFmpeg extraction (no decoding) |
Audio I/O (2 tools)
| Tool | Description |
|---|---|
| `audio_file_clip(filename, buffersize)` | Load an audio file |
| `write_audiofile(clip_id, filename, fps, nbytes, codec, bitrate)` | Export audio to file |
Clip Configuration (6 tools)
| Tool | Description |
|---|---|
| `set_position(clip_id, x, y, pos_str, relative)` | Set clip position |
| `set_audio(clip_id, audio_clip_id)` | Set a video's audio track |
| `set_mask(clip_id, mask_clip_id)` | Set a transparency mask |
| `set_start(clip_id, t)` | Set start time |
| `set_end(clip_id, t)` | Set end time |
| `set_duration(clip_id, t)` | Set duration |
Compositing & Arrangement (6 tools)
| Tool | Description |
|---|---|
| `subclip(clip_id, start_time, end_time)` | Cut a portion of a clip |
| `composite_video_clips(clip_ids, size, bg_color, use_bgclip)` | Overlay/compose clips |
| `tools_clips_array(clip_ids_rows, bg_color)` | Arrange clips in a grid |
| `concatenate_video_clips(clip_ids, method, transition)` | Concatenate clips |
| `composite_audio_clips(clip_ids)` | Mix audio clips |
| `concatenate_audio_clips(clip_ids)` | Concatenate audio clips |
Built-in Video Effects (32 tools)
| Tool | Description |
|---|---|
| `vfx_accel_decel` | Accelerate / decelerate playback |
| `vfx_black_white` | Convert to black and white |
| `vfx_blink` | Blink on/off |
| `vfx_crop` | Crop a region |
| `vfx_cross_fade_in` / `vfx_cross_fade_out` | Cross-fade transitions |
| `vfx_even_size` | Ensure even pixel dimensions |
| `vfx_fade_in` / `vfx_fade_out` | Fade from/to black |
| `vfx_freeze` / `vfx_freeze_region` | Freeze a frame or region |
| `vfx_gamma_correction` | Adjust gamma |
| `vfx_head_blur` | Blur a moving point (math expressions) |
| `vfx_invert_colors` | Invert colors |
| `vfx_loop` | Loop a clip |
| `vfx_lum_contrast` | Luminosity & contrast |
| `vfx_make_loopable` | Seamless loop with fade |
| `vfx_margin` | Add border/margin |
| `vfx_mask_color` | Create mask from color |
| `vfx_masks_and` / `vfx_masks_or` | Logical mask operations |
| `vfx_mirror_x` / `vfx_mirror_y` | Mirror horizontally/vertically |
| `vfx_multiply_color` | Color intensity |
| `vfx_multiply_speed` | Playback speed |
| `vfx_painting` | Oil painting effect |
| `vfx_resize` | Resize clip |
| `vfx_rotate` | Rotate clip |
| `vfx_scroll` | Scrolling viewport |
| `vfx_slide_in` / `vfx_slide_out` | Slide transitions |
| `vfx_supersample` | Anti-aliasing |
| `vfx_time_mirror` | Reverse playback |
| `vfx_time_symmetrize` | Play forward then reverse |
Custom Video Effects (9 tools)
| Tool | Description |
|---|---|
| `vfx_auto_framing(clip_id, target_aspect_ratio, smoothing)` | Face-tracking vertical crop |
| `vfx_chroma_key(clip_id, color, threshold, softness)` | Green screen removal |
| `vfx_clone_grid(clip_id, n_clones)` | Grid of video clones |
| `vfx_kaleidoscope(clip_id, n_slices, x, y)` | Radial symmetry |
| `vfx_kaleidoscope_cube(clip_id, kaleidoscope_params, cube_params)` | Combined kaleidoscope + cube |
| `vfx_matrix(clip_id, speed, density, chars, color, font_size)` | Matrix digital rain |
| `vfx_quad_mirror(clip_id, x, y)` | Four-quadrant mirror |
| `vfx_rgb_sync(clip_id, r/g/b_offset, r/g/b_time_offset)` | RGB channel split glitch |
| `vfx_rotating_cube(clip_id, speed, direction, zoom)` | 3D rotating cube |
Audio Effects (7 tools)
| Tool | Description |
|---|---|
| `afx_audio_delay(clip_id, offset, n_repeats, decay)` | Echo / delay |
| `afx_audio_fade_in` / `afx_audio_fade_out` | Fade in/out |
| `afx_audio_loop(clip_id, n_loops, duration)` | Loop audio |
| `afx_audio_normalize(clip_id)` | Normalize levels |
| `afx_multiply_stereo_volume(clip_id, left, right)` | Stereo balance |
| `afx_multiply_volume(clip_id, factor)` | Volume control |
Analysis & Utility (8 tools)
| Tool | Description |
|---|---|
| `tools_detect_scenes(clip_id, luminosity_threshold)` | Detect scene boundaries |
| `tools_detect_highlights(clip_id, threshold)` | Optical-flow highlight detection – finds high-motion moments |
| `tools_find_video_period(clip_id, start_time)` | Find a video's repeating period |
| `tools_find_audio_period(clip_id)` | Find an audio repeating period |
| `tools_drawing_color_gradient(size, p1, p2, col1, col2, shape, offset)` | Generate a gradient image |
| `tools_drawing_color_split(size, x, y, p1, p2, col1, col2, grad_width)` | Generate a color-split image |
| `tools_file_to_subtitles(filename, encoding)` | Parse an `.srt` subtitle file |
| `tools_check_installation()` | Verify the MoviePy + FFmpeg install |
Prompt Templates
Pre-built prompt presets for common creative workflows:
| Prompt | Use Case |
|---|---|
| `demonstrate_kaleidoscope(clip_id)` | 8-slice kaleidoscope for psychedelic visuals |
| `glitch_effect_preset(clip_id)` | High-energy RGB split glitch for music videos |
| `matrix_intro_preset(clip_id)` | Matrix digital rain for tech/hacker intros |
| `auto_framing_for_tiktok(clip_id)` | Vertical 9:16 with face tracking |
| `rotating_cube_transition(clip_id)` | 3D cube scene transition |
| `slideshow_wizard(images, ...)` | Slideshow with transitions, text overlays, configurable resolution |
| `title_card_generator(text, ...)` | Title card with solid background and typography |
| `demonstrate_kaleidoscope_cube(clip_id, ...)` | Combined kaleidoscope + cube demo |
File Upload API
The backend exposes a custom HTTP endpoint for direct file uploads:
```bash
curl -X POST http://localhost:8080/upload -F "file=@/path/to/video.mp4"
# → {"filename": "/app/video.mp4", "size": 12345678}
```
Custom Effects
All 9 custom effects live in `src/api/custom_fx/` and follow the MoviePy `Effect` protocol. The module also includes a standalone highlight-detection function. Full parameter documentation is in `CUSTOM_FX.md`.
| Effect | File | What It Does |
|---|---|---|
| Auto Framing | `auto_framing.py` | Face-tracking crop for vertical video. Uses Haar Cascades + exponential smoothing for cinematic tracking. Ideal for landscape → portrait conversion. |
| Matrix Rain | `matrix.py` | Falling-character overlay with a bright leading edge and fading trails. Configurable speed, density, charset, color. |
| Kaleidoscope | `kaleidoscope.py` | Radial symmetry – mirrors a wedge around a center point. Configurable slice count and center. |
| RGB Sync | `rgb_sync.py` | Splits RGB channels with independent spatial (px) and temporal (seconds) offsets. Chromatic aberration / glitch aesthetic. |
| Chroma Key | `chroma_key.py` | Green screen removal via Euclidean-distance masking. Configurable threshold and softness. |
| Clone Grid | `clone_grid.py` | Tiles the video in an auto-calculated grid layout. Supports 2–64+ clones. |
| Quad Mirror | `quad_mirror.py` | Four-quadrant symmetry around a configurable center point. |
| Rotating Cube | `rotating_cube.py` | 3D cube with video on all faces. Multi-axis rotation, optional quad-mirroring, circular motion paths. |
| KaleidoscopeCube | `kaleidoscope_cube.py` | Compound effect: Kaleidoscope → Rotating Cube, with independent config for each stage. |
| Highlight Detection | `highlight_detect.py` | Optical-flow (Farneback) analysis on keyframes to detect high-motion moments. Returns timestamped highlights with intensity scores. |
Docker
One-Command Deployment
```bash
# Build and start everything
docker compose up --build

# Just the web app (connects to MCP backend via StreamableHTTP)
docker compose up shorts-creator

# Just the MCP backend standalone
docker compose up vidmagik-mcp
```
Services
| Service | Description | Port | Dockerfile |
|---|---|---|---|
| `shorts-creator` | AI Shorts Creator web app | `3000` | `src/app/Dockerfile` |
| `vidmagik-mcp` | MCP server (HTTP mode) | `8080` | `Dockerfile` |
In Docker Compose, the frontend connects to the backend over StreamableHTTP via the MCP_SERVER_URL environment variable (set to http://vidmagik-mcp:8080/mcp). When running locally without Docker, the frontend spawns the backend as a local subprocess over stdio.
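For orientation, the relevant parts of a compose file wired this way might look like the following sketch (inferred from the ports, shared volume, and `MCP_SERVER_URL` described above – not the project's actual `docker-compose.yml`):

```yaml
services:
  vidmagik-mcp:
    build: .                    # backend image, root Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./media:/app/media      # shared working directory

  shorts-creator:
    build: src/app              # frontend image, src/app/Dockerfile
    ports:
      - "3000:3000"
    environment:
      NICEGUI_HOST: 0.0.0.0
      MCP_SERVER_URL: http://vidmagik-mcp:8080/mcp
    volumes:
      - ./media:/app/media
    depends_on:
      - vidmagik-mcp
```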
Docker Details
**Backend image** (`Dockerfile`):

- Base: `python:3.12-slim`
- System deps: `ffmpeg`, `imagemagick`, `libsm6`, `libxext6`, `libgl1`
- Auto-patches the ImageMagick policy for TextClip support
- Package manager: `uv` (install + sync from lockfile, pinned to Python 3.13)

**Frontend image** (`src/app/Dockerfile`):

- Base: `python:3.12-slim`
- System deps: `ffmpeg`
- Pinned to Python 3.13
- Exposes port `3000`
Shared volume: `./media:/app/media` – both services share video files.
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `PYTHONUNBUFFERED` | `1` | Disable output buffering |
| `NICEGUI_HOST` | `127.0.0.1` | NiceGUI bind address (`0.0.0.0` in Docker) |
| `MCP_SERVER_URL` | – | StreamableHTTP URL for a remote MCP server (Docker only) |
| `LM_STUDIO_API_BASE` | – | LM Studio API endpoint |
| `LLM_MODEL` | – | Model name (auto-prefixed for LM Studio) |
| `GEMINI_API_KEY` | – | Gemini API key (auto-selects model) |
| `OPENAI_API_KEY` | – | OpenAI API key (auto-selects model) |
| `ANTHROPIC_API_KEY` | – | Anthropic API key (auto-selects model) |
Testing
Test Suite
| File | Scope |
|---|---|
| `tests/test_e2e.py` | Backend MCP tools – end-to-end tests |
| `tests/test_nicegui_integration.py` | NiceGUI UI component integration tests |
| `tests/frontend_e2e_test.py` | Full frontend end-to-end tests |
Running Tests
```bash
# All tests
uv run pytest

# With coverage
uv run pytest --cov=src --cov-report=term-missing

# Specific module
uv run pytest tests/test_e2e.py -v
```
Pytest config (`pyproject.toml`):

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
```
Dependencies
Runtime
| Package | Purpose |
|---|---|
| FastMCP ≥ 3.0.0 | MCP server framework |
| MoviePy ≥ 2.2.1 | Video editing engine |
| LiteLLM ≥ 1.40.0 | Universal LLM API client (LM Studio, Gemini, OpenAI, Anthropic, local, etc.) |
| NiceGUI ≥ 2.0.0 | Web UI framework for the Shorts Creator |
| OpenCV (headless) ≥ 4.13.0 | Computer vision – face detection for auto-framing, optical flow for highlight detection |
| NumExpr ≥ 2.14.1 | Safe math expression evaluation (HeadBlur effect) |
| yt-dlp ≥ 2024.0.0 | Video downloading from 1000+ sites |
Dev
| Package | Purpose |
|---|---|
| pytest ≥ 9.0.2 | Testing framework |
| pytest-asyncio ≥ 1.3.0 | Async test support |
| pytest-cov ≥ 7.0.0 | Coverage reporting |
| httpx ≥ 0.28.1 | HTTP client for testing |
System
- **FFmpeg** – Required by MoviePy for all encoding/decoding
- **ImageMagick** – Required for text rendering (`TextClip`)
Development
Setting Up for Development
```bash
# Clone the repo
git clone https://github.com/vizionik25/vidmagikAgent.git
cd vidmagikAgent

# Install all dependencies (including dev tools)
uv sync
```
Running from Source
```bash
# Launch the web app
uv run src/app/main.py
# → Opens at http://127.0.0.1:3000
```
Running the MCP Server Standalone
```bash
# StreamableHTTP mode (for remote clients, Docker, or browser-based tools)
uv run src/api/main.py --transport http --host 0.0.0.0 --port 8080

# SSE mode (Server-Sent Events)
uv run src/api/main.py --transport sse --host 0.0.0.0 --port 8080

# stdio mode (default – for subprocess-based MCP clients)
uv run src/api/main.py --transport stdio
```
Server flags:
| Flag | Default | Description |
|---|---|---|
| `--transport` | `http` | Transport protocol: `stdio`, `sse`, or `http` |
| `--host` | `0.0.0.0` | Bind address for HTTP/SSE modes |
| `--port` | `8080` | Port for HTTP/SSE modes |
Running Tests
```bash
# All tests
uv run pytest

# With coverage report
uv run pytest --cov=src --cov-report=term-missing

# Specific test file
uv run pytest tests/test_e2e.py -v
```
Uploading Files to the MCP Server
When the server is running in HTTP mode, you can upload files directly:
```bash
curl -X POST http://localhost:8080/upload -F "file=@/path/to/video.mp4"
# → {"filename": "/app/video.mp4", "size": 12345678}
```
Adding a New Custom Effect
1. Create `src/api/custom_fx/my_effect.py`:

```python
from moviepy import Effect
import numpy as np


class MyEffect(Effect):
    """Description of your effect."""

    def __init__(self, intensity: float = 1.0):
        self.intensity = intensity

    def apply(self, clip):
        def filter(get_frame, t):
            frame = get_frame(t)
            # Transform the frame here
            return frame

        return clip.transform(filter)
```
2. Export it from `src/api/custom_fx/__init__.py`:

```python
from .my_effect import MyEffect  # add your effect here
```
3. Add an MCP tool in `src/api/main.py`:

```python
from custom_fx import MyEffect  # be sure to add your effect to the custom_fx import

# The tool name must start with vfx_ and end with _effect (this is how the
# frontend knows it's a video effect), and the tool must return the clip_id
# of the modified clip. The MCP server file is quite large, so please keep
# your organization clean; it makes your code easier to find.
@mcp.tool
def vfx_my_effect(clip_id: str, intensity: float = 1.0) -> str:
    """Description of your custom effect."""
    clip = get_clip(clip_id)
    return register_clip(clip.with_effects([MyEffect(intensity)]))
```
License
MIT License – Copyright © 2026 vizionik25