Open magic for your media - an AI video-effects tool
OpenEffect is what Higgsfield, Pollo, and Runway-style "click one button, get a cinematic clip" tools look like when they're open-source, BYOK, and built around portable effect manifests you own. Pick an effect (atmosphere, camera moves, transforms), drop in a photo, and a short video comes back. Every effect is a YAML file you can fork, tweak, share, and ship to the community catalog.
🚧 Early days - breaking changes possible until version 1.
Quickstart
uvx
uvx openeffect
No install - uv pulls the package into an ephemeral env and runs it.
pip
pip install openeffect
openeffect
Docker Compose
curl -O https://raw.githubusercontent.com/openeffect/openeffect/main/docker-compose.yml
docker compose up
Whichever path you pick, the browser opens at http://localhost:3131. Paste a fal.ai key into Settings (or set FAL_KEY as an environment variable). Then pick an effect, drag in a photo, and hit Generate.
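If you'd rather never paste the key into the UI, exporting FAL_KEY before launch works too. The key below is a placeholder; for Docker Compose you may need to pass the variable through in docker-compose.yml:

```bash
# Supply the fal.ai key via the environment instead of the Settings page
export FAL_KEY=sk-...   # placeholder - use your real fal.ai key
uvx openeffect          # or `openeffect` after pip install
```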
Why BYOK?
OpenEffect is bring your own key - the app runs on your machine, but generation itself happens on whichever cloud provider you point it at. Two reasons:
- You can't realistically self-host video models. State-of-the-art video diffusion needs 24-80 GB of GPU memory and minutes per clip. Closed models like Kling can't be self-hosted at all. The good open ones (Wan, etc.) need a beefy GPU most people don't have at home.
- It's cheap to try. fal.ai charges roughly $0.05-$0.50 per video depending on model and resolution. A handful of dollars buys hundreds of test runs - far less than any "AI video" subscription, with no monthly commitment and no auto-renewal.
We picked fal.ai as the first provider because they expose the broadest open-vs-closed model selection behind one unified API. We're not affiliated with fal.ai - it was just the cleanest plug-in option. More providers (Replicate, Hugging Face Inference, your own GPU server) are on the roadmap. The app's storage stays local - your fal.ai key, your run history, and your generated videos all live in ~/.openeffect/ and never touch our servers, because there are no servers.
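To make "local-first" concrete, here is a rough sketch of what that data directory holds. The sub-paths are not documented anywhere in this README, so treat the names below as illustrative only:

```text
~/.openeffect/     # nothing leaves your machine (names below are illustrative)
├── config         # app settings, including your fal.ai key
├── effects        # installed and authored effect manifests
├── runs           # run history: inputs and metadata per generation
└── videos         # the generated clips themselves
```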
Build effects together
The bundled examples are just seed content - a starter set of effects (atmosphere, camera moves, transforms) ships in the box, all stored as YAML manifests under effects/. The point of this project isn't to ship a fixed list - it's to grow an open library of cinematic effects, built in the open.
- Author an effect with the in-app editor - click + in the header and you get a blank manifest plus an asset uploader.
- Export any effect as a .zip archive (Effect → ⋯ → Export) - manifest, prompt, asset references, all portable.
- Install an effect with one click - drop a .zip into the Install effect dialog, or paste a URL to a remote manifest.yaml (see the example below).
- Share what you make. Publish on your own GitHub and post the install URL anywhere - Discord, gists, wherever your audience is.
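For the URL route, any publicly reachable raw manifest works. A hypothetical example - the path layout is illustrative, not a published effect:

```text
https://raw.githubusercontent.com/<your-user>/<your-repo>/main/effects/my-effect/manifest.yaml
```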
The long-term plan is a separate community-catalog repo where anyone can browse, install, and remix without ever leaving the app - coming soon. If you've made something cool in the meantime, open an issue and we'll feature it.
Features
- 🎬 Curated effect library - atmosphere, camera moves, transforms, more landing as the catalog grows
- 🧠 Multi-model, multi-provider - Kling, PixVerse, Wan; more models and providers on the roadmap
- 🔑 BYOK - bring your own fal.ai key, pay only for what you generate
- 📁 Local-first storage - your runs, effects, and config live in ~/.openeffect/
- 📜 Run history - every generation saved with its inputs and the resulting video
- 🧪 Playground mode for quick prompt experiments without authoring a manifest
- ✏️ In-app YAML editor for forking, customizing, and authoring new effects
- 📦 Export / import effects as portable .zip archives
- 🌗 Light / dark / system theme
- ⚡ Easy to install and run - one command via uvx, pip, or Docker Compose
Writing your own effect
The fastest path is the in-app editor - click + in the header, fill in the manifest, and upload a sample preview and an input image. Effects are plain YAML - here's the full bundled Eyes Glow manifest (more under effects/):
manifest_version: 1
id: openeffect/eyes-glow
name: Eyes Glow
description: >
  A subtle luminous spark appears in the subject's eyes,
  creating a cinematic mystical look while preserving realism and identity.
version: "0.1.0"
author: OpenEffect
category: atmosphere
tags:
  - eyes
  - glow
  - mystical
  - subtle

showcases:
  - preview: preview.mp4
    inputs:
      image: input-image.jpg

inputs:
  image:
    type: image
    role: start_frame
    required: true
    label: "Your photo"
    hint: "Works best with portraits or close-ups where the eyes are clearly visible"

generation:
  models:
    - kling-3.0
  default_model: kling-3.0

  prompt: >
    A single continuous shot of the same subject from the input image.
    A subtle luminous spark gradually appears in the eyes,
    creating a soft cinematic eye shimmer.
    Preserve the same identity, face, hairstyle, clothing, pose, and background.
    The glow stays localized inside the irises and pupils,
    with only a faint natural reflection nearby.
    The subject remains planted in the same scene.
    Only subtle natural micro-motion is allowed.
    No cut, no scene replacement, no duplicate subject.

  negative_prompt: >
    cut, scene replacement, duplicate subject, extra people, warped face,
    deformed anatomy, full face glow, entire body glow, random neon streaks,
    laser beams, energy explosion, fire from eyes, destructive energy,
    weapon-like glow, heavy bloom, extreme overexposure, scary horror eyes,
    smoke, blur, heavy jitter, text, watermark

  model_overrides:
    kling-3.0:
      prompt: >
        Close-up or medium close-up of the same subject from the input image.
        A soft cinematic spark slowly appears in the eyes in one continuous shot.
        The eyes gain a gentle luminous shimmer inside the irises and pupils,
        with very light reflection around the eyelids only.
        The glow should feel elegant, calm, and slightly magical,
        not scary, not aggressive, and not like light is being emitted outward.
        The same subject remains planted and recognizable throughout.
        Only slight blink, breathing, or micro-expression is allowed.
        Clean continuity, no cut, no scene replacement, no laser beams.

  params:
    duration: 4
    generate_audio: false
    guidance_scale: 0.52
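Starting from scratch, a manifest can be much leaner than Eyes Glow. The skeleton below is a hypothetical example that reuses only the fields visible above; every value (id, name, prompts, params) is a placeholder to replace, and the exact set of required keys may shift before 1.0:

```yaml
manifest_version: 1
id: your-handle/soft-rain        # hypothetical id - use your own namespace
name: Soft Rain
description: >
  Gentle rain begins to fall over the scene while the subject stays unchanged.
version: "0.1.0"
author: You
category: atmosphere
tags:
  - rain
  - weather

inputs:
  image:
    type: image
    role: start_frame
    required: true
    label: "Your photo"

generation:
  models:
    - kling-3.0
  default_model: kling-3.0
  prompt: >
    A single continuous shot of the same scene from the input image.
    Soft rain gradually begins to fall; identity, framing, and background stay unchanged.
  negative_prompt: >
    cut, scene replacement, duplicate subject, heavy jitter, text, watermark
  params:
    duration: 4
    generate_audio: false
```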
Configuration
- fal.ai API key - Settings → paste key, or the FAL_KEY=sk-... env var
- Data directory - ~/.openeffect/ (override with OPENEFFECT_USER_DATA_DIR)
- Server port - 3131 by default (override with OPENEFFECT_PORT)
- Skip browser open - OPENEFFECT_NO_BROWSER=true
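As a combined example, a launch with all three overrides set (values are illustrative):

```bash
OPENEFFECT_PORT=4040 \
OPENEFFECT_USER_DATA_DIR=~/media/openeffect \
OPENEFFECT_NO_BROWSER=true \
openeffect
```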
Develop
git clone https://github.com/openeffect/openeffect && cd openeffect
make install
make test
make lint
make dev # backend on :3131, Vite frontend on :5173
The frontend is React + TypeScript + Tailwind in client/. The backend is FastAPI + SQLite in server/. Effects YAML lives in effects/.
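The bundled Eyes Glow manifest references its assets (preview.mp4, input-image.jpg) by relative name, so a new bundled effect is essentially a manifest plus its assets under effects/. The folder-per-effect layout below is an assumption, not a documented contract:

```text
effects/
└── eyes-glow/
    ├── manifest.yaml      # the manifest shown above
    ├── preview.mp4        # showcase preview
    └── input-image.jpg    # showcase input
```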
Roadmap
The aim is to grow OpenEffect into an open, portable effect studio - everything Higgsfield-style products do, but local-first and BYOK.
Near-term
- More effects. Aging, weather (rain/snow), aesthetic filters, time-of-day shifts, audio-reactive params.
- Transitions. Between two photos - match-cuts, dissolves, morphs.
- Motion control. Camera-path templates, subject-locked motion, controllable speed and arc.
- Public effect index. Browse and one-click install community-authored effects without leaving the app.
- More providers. Replicate, Hugging Face Inference, custom HTTP endpoints - the fal.ai dependency goes from "the only choice" to "one of many."
Studio direction
- Reference characters. Reusable subject identities you can drop into any effect and keep consistent across runs.
- Reference voices. Reusable voice presets for narration, dialog, and lip-sync work.
- Lip sync. Drive a character's mouth from an audio clip or text.
- Local inference. For open models that fit on consumer hardware - if there's enough demand to justify the engineering.
Have an idea? Open an issue - early-days roadmaps benefit most from people picking what they actually want.
Acknowledgements
OpenEffect is glue. The heavy lifting belongs to the model authors - the teams behind Kling, PixVerse, Wan, and the other models in the catalog - and to fal.ai for the unified inference API. We're not affiliated with any of the above - they're independent products that happen to make this one possible.
License
MIT - see LICENSE.