
Track, search, and version every AI image and video prompt. Semantic search, prompt lineage, cross-platform.


MyGens
The prompt library for AI image and video creators

Install · Quick Start · How It Works · Platforms · Architecture


MyGens demo — semantic search and lineage tree

You create AI images and videos across Midjourney, DALL-E, Stable Diffusion, ComfyUI, Runway, Sora, Kling, and more. Your prompts are everywhere — Discord, ChatGPT threads, browser tabs, random folders. When you finally nail a perfect result, you can't find the exact prompt a week later.

MyGens gives you three things no other tool does:

1. Semantic search — search "moody dark vibes" and find your prompt about "atmospheric shadows, film noir, low-key lighting." Works by meaning, not just keywords.

2. Prompt lineage — see how any idea evolved from first draft to final version, across platforms, as an interactive tree.

3. Word-level diff — compare any two prompt versions and see exactly what changed, which parameters moved, and which version scored better.

Everything runs locally. Your prompts never leave your machine.

pip install mygens

How MyGens compares

%%{init: {'theme': 'dark', 'themeVariables': { 'fontSize': '14px'}}}%%
graph LR
    subgraph TODAY["How creators manage prompts today"]
        D[Discord messages] -.-> L[Lost]
        N[Notion docs] -.-> L
        F[File names] -.-> L
        C[ChatGPT threads] -.-> L
        B[Browser tabs] -.-> L
    end

    subgraph PT["With MyGens"]
        S[Scan / Sync / Save] --> DB[(Local DB)]
        DB --> SE[Semantic Search]
        DB --> LN[Lineage Tree]
        DB --> DF[Prompt Diff]
        DB --> DA[Dashboard]
    end

    style L fill:#8b3a3a,color:#fff
    style DB fill:#8b6914,color:#fff
| | MyGens | DiffusionToolkit | Eagle | Notion |
|---|---|---|---|---|
| Cross-platform (MJ + SD + DALL-E + ComfyUI + Sora) | Yes | SD only | Not AI-aware | Manual |
| Semantic search | Yes | — | — | — |
| Prompt lineage tracking | Yes | — | — | — |
| Word-level prompt diff | Yes | — | — | — |
| Video generation support | Yes | — | — | — |
| API sync (Replicate, fal.ai) | Yes | — | — | — |
| Local-first, 100% private | Yes | Yes | Yes | No |
| Cross-OS (macOS, Linux, Windows) | Yes | Windows only | Yes | Web |
| Open source | Apache 2.0 | MIT | $30 | — |

Install

pip install mygens

No Docker. No Node.js. No native compilation. Python 3.11+ on any OS.

Quick start

mygens init                    # Create your library

# --- Import from local tools (automatic) ---
mygens scan ~/ComfyUI/output   # Reads workflow JSON from every PNG
mygens scan ~/sd/outputs       # Reads A1111 metadata from every PNG

# --- Import from cloud platforms (automatic) ---
mygens sync replicate          # Imports Flux, Luma, CogVideoX, etc.
mygens sync fal                # Imports from fal.ai endpoints

# --- Save manually (5 seconds) ---
mygens log "cyberpunk city at night" -p sora -m sora-v1

# --- Search by meaning ---
mygens search "moody lighting" # Finds "dark atmospheric shadows, film noir"

# --- See creative evolution ---
mygens tree <id>               # Shows how an idea branched and evolved
mygens diff <id1> <id2>        # Word-level diff between two versions

# --- Web dashboard ---
mygens serve                   # Opens http://localhost:9753

How it works

Capture — get prompts into MyGens

%%{init: {'theme': 'dark'}}%%
flowchart LR
    subgraph LOCAL["Local Generation"]
        SD[Stable Diffusion] --> PNG1[PNG with metadata]
        CU[ComfyUI] --> PNG2[PNG with workflow JSON]
        MJ[Midjourney] --> PNG3[PNG with EXIF]
    end

    subgraph CLOUD["Cloud Platforms"]
        REP[Replicate<br/>Flux · Luma · CogVideoX] --> API1[API sync]
        FAL[fal.ai<br/>Kling · Minimax · Flux] --> API2[API sync]
    end

    subgraph MANUAL["Web-based Tools"]
        SORA[Sora] --> WEB[+ Save prompt]
        RW[Runway] --> WEB
        DALLE[DALL-E] --> WEB
        OTHER[Any tool] --> WEB
    end

    PNG1 --> SCAN["mygens scan"]
    PNG2 --> SCAN
    PNG3 --> SCAN
    API1 --> SYNC["mygens sync"]
    API2 --> SYNC
    WEB --> DASH[Dashboard upload]

    SCAN --> DB[(MyGens DB)]
    SYNC --> DB
    DASH --> DB

    style DB fill:#8b6914,color:#fff

Search — find anything by meaning

%%{init: {'theme': 'dark'}}%%
flowchart LR
    Q["Search: 'gloomy dark vibes'"] --> K[Keyword Search<br/>SQLite FTS5]
    Q --> S[Semantic Search<br/>BGE-small embeddings]

    K --> |"exact matches"| M[Merge + Deduplicate]
    S --> |"meaning matches"| M

    M --> R1["'dark moody atmospheric<br/>shadows, film noir'"]
    M --> R2["'cyberpunk city at night,<br/>neon rain, wet streets'"]
    M --> R3["'cinematic lighting,<br/>volumetric rays'"]

    style Q fill:#8b6914,color:#fff
    style R1 fill:#3a5a3a,color:#fff

How it works under the hood: A 33MB local model (BGE-small, ONNX Runtime) embeds every prompt as a 384-dimensional vector. At search time, your query is embedded and compared against all stored vectors using cosine similarity. Takes <10ms for 10,000 prompts. No GPU. No API calls. No cloud.
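The ranking math behind that step can be sketched in plain NumPy. This is an illustrative sketch, not MyGens' actual code: random vectors stand in for BGE-small output, since the real model isn't needed to show how cosine similarity picks the nearest prompt.

```python
import numpy as np

def cosine_top_k(query_vec: np.ndarray, prompt_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank stored prompt embeddings by cosine similarity to the query."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    m = prompt_vecs / np.linalg.norm(prompt_vecs, axis=1, keepdims=True)
    return np.argsort(m @ q)[::-1][:k]  # highest similarity first

# Toy stand-ins for BGE-small output: 10,000 prompts, 384 dims each.
rng = np.random.default_rng(0)
stored = rng.normal(size=(10_000, 384)).astype(np.float32)
query = stored[42] + 0.01 * rng.normal(size=384).astype(np.float32)  # "near" prompt 42

print(cosine_top_k(query, stored)[0])  # 42: the perturbed prompt ranks first
```

In practice the query vector comes from embedding the search text with the same model used at ingest time, which is why "moody lighting" lands near "film noir" even with zero shared keywords.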

Lineage — see how ideas evolve

%%{init: {'theme': 'dark'}}%%
graph TD
    V1["v1: 'cyberpunk city'<br/><small>Midjourney · v7</small>"]

    V1 --> V2["v2: '...neon reflections<br/>on wet streets'<br/><small>Midjourney · variation</small>"]
    V1 --> V8["v8: '...aerial drone shot'<br/><small>Midjourney · variation</small>"]

    V2 --> V3["⭐ v3: '...cinematic,<br/>anamorphic lens'<br/><small>Midjourney · hero shot · ★★★★★</small>"]
    V2 --> V7["v7: '...rain, puddles,<br/>35mm film grain'<br/><small>Midjourney · variation</small>"]

    V3 --> V4["v4: translated to<br/>Stable Diffusion<br/><small>dreamshaperXL · translate</small>"]
    V3 --> V6["v6: translated to<br/>DALL-E for client<br/><small>gpt-4o · translate</small>"]

    V4 --> V5["v5: translated to<br/>ComfyUI + ControlNet<br/><small>dreamshaperXL · translate</small>"]

    style V3 fill:#8b6914,color:#fff,stroke:#c4a44e,stroke-width:2px
    style V1 fill:#3a4a6a,color:#fff
    style V8 fill:#3a4a6a,color:#fff
    style V4 fill:#6a4a3a,color:#fff
    style V5 fill:#6a4a3a,color:#fff
    style V6 fill:#3a5a3a,color:#fff

One idea. Four platforms. Eight iterations. Two branches. The lineage tree shows the complete creative evolution — which path led to the 5-star hero shot, where you explored a different direction, and how the prompt changed when translated across tools.
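Walking a tree like this maps naturally onto the recursive CTEs the Architecture section credits for the lineage DAG. A minimal sketch, assuming a simplified `generations(id, parent_id, prompt)` table rather than MyGens' actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE generations (id INTEGER PRIMARY KEY, parent_id INTEGER, prompt TEXT)")
con.executemany("INSERT INTO generations VALUES (?, ?, ?)", [
    (1, None, "cyberpunk city"),
    (2, 1, "cyberpunk city, neon reflections on wet streets"),
    (3, 2, "cyberpunk city, neon reflections, cinematic, anamorphic lens"),
    (8, 1, "cyberpunk city, aerial drone shot"),
])

# Walk every descendant of prompt 1, tracking depth for indentation.
tree = con.execute("""
    WITH RECURSIVE lineage(id, prompt, depth) AS (
        SELECT id, prompt, 0 FROM generations WHERE id = ?
        UNION ALL
        SELECT g.id, g.prompt, l.depth + 1
        FROM generations g JOIN lineage l ON g.parent_id = l.id
    )
    SELECT id, depth FROM lineage ORDER BY depth, id
""", (1,)).fetchall()

print(tree)  # [(1, 0), (2, 1), (8, 1), (3, 2)]
```

One query returns the whole subtree, so rendering `mygens tree` never needs N round-trips per branch.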

Diff — see exactly what changed

 Generation v1 → v3

 PROMPT:
   cyberpunk city [-at dusk-] {+at night+}, neon reflections on wet
   [-streets-] {+streets, cinematic, anamorphic lens, rain+}

 PARAMETERS:
   seed:     1001 → 3003
   stylize:  500 → 750  [+50%]
   ar:       1:1 → 21:9 [CHANGED]

 RATING: 0 → 5  [+5 ★]
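A wdiff-style rendering like the one above can be approximated in a few lines with the standard library's `difflib`. This is an illustrative sketch, not MyGens' actual diff engine:

```python
import difflib

def word_diff(old: str, new: str) -> str:
    """Render a word-level diff: deletions in [-..-], additions in {+..+}."""
    a, b = old.split(), new.split()
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if tag in ("delete", "replace"):
            out.append("[-" + " ".join(a[i1:i2]) + "-]")
        if tag in ("insert", "replace"):
            out.append("{+" + " ".join(b[j1:j2]) + "+}")
        if tag == "equal":
            out.extend(a[i1:i2])
    return " ".join(out)

print(word_diff(
    "cyberpunk city at dusk, neon reflections on wet streets",
    "cyberpunk city at night, neon reflections on wet streets, cinematic, anamorphic lens, rain",
))
```

`SequenceMatcher` finds the longest runs of unchanged words, so only the spans that actually moved get bracketed.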

Platforms

| Platform | How MyGens captures it | Setup |
|---|---|---|
| Stable Diffusion (A1111, Forge) | Reads prompt, seed, model, sampler, CFG from PNG tEXt chunks | `mygens scan <folder>` |
| ComfyUI | Walks the workflow node graph — extracts KSampler, CLIPTextEncode, CheckpointLoader | `mygens scan <folder>` |
| Midjourney | Reads EXIF Description, parses `--ar`, `--stylize`, `--v` params | `mygens scan <folder>` |
| Replicate (Flux, Luma Ray, CogVideoX, LTX-Video, Minimax) | Syncs predictions via API, stores output URLs (no download) | `mygens sync replicate` |
| fal.ai (Flux, Kling, Minimax, Luma, SVD) | Syncs request history via API, stores output URLs | `mygens sync fal` |
| Sora / Runway / Kling / Pika / Veo | Paste prompt + drop video in dashboard | Save prompt button |
| DALL-E / ChatGPT | Paste prompt + drop image in dashboard | Save prompt button |
| Any tool | Webhook or CLI | `POST /api/v1/webhooks/generic` |
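For the local tools, scanning hinges on the text chunks PNG files carry alongside the pixels. A dependency-free sketch of pulling tEXt/zTXt key/value pairs out of a PNG byte stream; it illustrates the mechanism, not MyGens' actual parser (A1111, for instance, stores its settings under a `parameters` key):

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict[str, str]:
    """Extract tEXt and zTXt key/value pairs from a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword \x00 latin-1 text
            key, _, value = body.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        elif ctype == b"zTXt":  # keyword \x00 method-byte zlib-compressed text
            key, _, rest = body.partition(b"\x00")
            out[key.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
        if ctype == b"IEND":
            break
    return out
```

Because the metadata lives inside the image file itself, `mygens scan` needs nothing but the output folder.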

Dashboard

mygens serve   # → http://localhost:9753

Gallery view with prompts grouped by platform (list or grid). Click any prompt → detail panel slides in. View lineage. Compare versions. Rate, tag, search. Dark and light themes. Manual entry with drag-and-drop for images and videos.

Architecture

%%{init: {'theme': 'dark'}}%%
graph TB
    subgraph UI["User Interfaces"]
        CLI[CLI<br/><small>13 commands · Typer + Rich</small>]
        WEB[Web Dashboard<br/><small>React · Vite · TailwindCSS</small>]
    end

    subgraph API["REST API"]
        FA[FastAPI<br/><small>16+ endpoints · Pydantic v2</small>]
    end

    subgraph CORE["Core Engine"]
        GEN[Generation CRUD]
        SEARCH[Hybrid Search<br/><small>FTS5 + BGE-small embeddings</small>]
        LIN[Lineage DAG<br/><small>Recursive CTEs</small>]
        DIFF[Prompt Diff<br/><small>Word-level</small>]
        FRAG[Fragment Analysis]
    end

    subgraph CAPTURE["Capture Layer"]
        PARSE[Parsers<br/><small>A1111 · ComfyUI · Midjourney</small>]
        INT[Integrations<br/><small>Replicate · fal.ai · Webhooks</small>]
    end

    subgraph STORAGE["Storage"]
        DB[(SQLite + FTS5<br/><small>+ embedding vectors</small>)]
    end

    CLI --> GEN & SEARCH & LIN & DIFF
    WEB --> FA --> GEN & SEARCH & LIN & DIFF
    GEN & SEARCH & LIN & DIFF & FRAG --> DB
    PARSE --> GEN
    INT --> GEN

    style DB fill:#8b6914,color:#fff

Deep dive: ARCHITECTURE.md — data model, search internals, parser system, every design decision with tradeoffs.

Tech stack

| | Choice | Why |
|---|---|---|
| Language | Python 3.11+ | Zero-friction install. AI creators already have it. |
| Database | SQLite + FTS5 | Built-in. Zero config. Single portable file. |
| Search | BGE-small (ONNX Runtime) | 33MB. CPU-only. <10ms. No PyTorch (2GB). |
| CLI | Typer + Rich | Beautiful output. Typed. |
| API | FastAPI | Async. Auto-generated OpenAPI docs. |
| Dashboard | React + Vite + TailwindCSS | Bundled in the Python package. |
| DAG viz | React Flow + dagre | Interactive lineage graphs. |
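The keyword half of the hybrid search rides on FTS5, which ships inside SQLite itself. A toy sketch of the idea (table name and schema are illustrative, not MyGens' actual layout):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes the text column for full-text queries.
con.execute("CREATE VIRTUAL TABLE prompts USING fts5(text)")
con.executemany("INSERT INTO prompts(text) VALUES (?)", [
    ("dark moody atmospheric shadows, film noir",),
    ("cyberpunk city at night, neon rain",),
    ("sunny meadow, golden hour, soft light",),
])

# MATCH does the full-text lookup; bm25() ranks by relevance (lower is better).
hits = con.execute(
    "SELECT text FROM prompts WHERE prompts MATCH ? ORDER BY bm25(prompts)",
    ("noir OR neon",),
).fetchall()
print([h[0] for h in hits])  # the noir and neon prompts, ranked
```

Exact-term hits from FTS5 are then merged with the embedding results, so literal matches never lose to fuzzy ones.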

CLI reference

mygens init                    Initialize library
mygens scan <path>             Import images (auto-detects platform)
mygens sync <platform>         Import from Replicate or fal.ai
mygens log <prompt>            Log a prompt manually
mygens search <query>          Semantic + keyword search
mygens list [--json]           List with filters
mygens diff <id1> <id2>        Word-level prompt diff
mygens tree <id>               Lineage tree visualization
mygens link <child> <parent>   Connect prompts in lineage
mygens rate <id> <1-5>         Rate a prompt
mygens tag <id> -t "a,b"       Add tags
mygens stats                   Library statistics
mygens serve                   Start web dashboard

REST API

Available at localhost:9753 when running mygens serve:

POST   /api/v1/generations/upload     Create with image/video upload
GET    /api/v1/generations/           List with filters
GET    /api/v1/generations/{id}       Single prompt + outputs
GET    /api/v1/generations/{id}/tree  Lineage DAG
GET    /api/v1/generations/{id}/diff/{other}  Prompt diff
GET    /api/v1/search?q=...          Hybrid search
POST   /api/v1/webhooks/replicate    Replicate webhook
POST   /api/v1/webhooks/generic      Generic webhook

Full OpenAPI docs: http://localhost:9753/docs
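Any script can feed the generic webhook. A sketch using only the standard library; the payload field names here are assumptions for illustration, not MyGens' documented schema:

```python
import json
import urllib.request

# Hypothetical payload: these field names are illustrative assumptions.
payload = {"prompt": "cyberpunk city at night", "platform": "sora", "model": "sora-v1"}

req = urllib.request.Request(
    "http://localhost:9753/api/v1/webhooks/generic",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With `mygens serve` running, urllib.request.urlopen(req) would deliver it.
```

Check `/docs` for the real request schema before wiring this into a pipeline.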

Extend it

Add a parser for a new platform in ~30 lines:

from mygens.parsers.base import Parser, ParsedGeneration

class MyParser(Parser):
    name = "My Platform"
    platforms = ["my_platform"]

    def detect(self, buffer: bytes) -> bool: ...
    def parse(self, buffer: bytes, file_path: str) -> list[ParsedGeneration]: ...

See a1111.py for a complete example.

Privacy

  • 100% local. Prompts, images, videos, embeddings — everything stays on your machine.
  • Zero telemetry. No analytics. No tracking. No phone-home.
  • Portable. One folder (~/.mygens/), one SQLite file. Copy it anywhere.

Development

git clone https://github.com/chopratejas/mygens && cd mygens
uv venv && uv pip install -e ".[dev]"
pytest tests/                       # 138 tests, ~15s, zero mocks

License

Apache 2.0

Contributing

The highest-impact contributions right now:

  1. Parsers — Flux, Pika, NanoBanana Pro, any tool that embeds metadata
  2. Browser extension — auto-capture from Sora, ChatGPT, Runway, Midjourney web
  3. API integrations — RunwayML, Luma, any platform with generation history
  4. Dashboard — the UI is functional but early

See ARCHITECTURE.md for the full codebase guide.

