
Modular multimodal pipeline for vision-to-LLM integration

Project description

🧠 ModuMuse

Modular Multimodal Intelligence
Plug any Hugging Face LLM and vision encoder together via a learnable projector.
Supports zero-shot inference today and adapter-based fine-tuning tomorrow.



🚀 Features

  • 🔌 Plug-and-play architecture for combining LLMs and vision encoders
  • 🧠 Supports popular models like Qwen, Mistral, LLaMA, CLIP, XCLIP, SAM
  • 🧪 Zero-shot inference with learnable projector modules
  • 🛠️ Adapter-based fine-tuning (coming soon)
  • 📊 Easy benchmarking and visualization tools

📦 Installation

```bash
pip install modu-muse
```

🧬 Quick Start

```python
from modu_muse import Pipeline

pipe = Pipeline(
    llm_name="mistralai/Mistral-7B-Instruct-v0.2",
    vision_name="openai/clip-vit-base-patch16"
)

result = pipe.infer("path/to/image.jpg", "Describe the scene.")
print(result)
```

🧠 Architecture

[Image/Video] → [Vision Encoder] → [Projector] → [LLM]
  • Vision encoder extracts features
  • Projector maps visual features to LLM-compatible embeddings
  • LLM generates text conditioned on visual context
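
The projector in the middle is typically a small learnable module. Below is a minimal sketch of such a mapping, using NumPy for illustration — the two-layer MLP shape, hidden width, and dimensions are our assumptions, not ModuMuse's actual implementation:

```python
import numpy as np

def gelu(x):
    """Tanh approximation of the GELU activation."""
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

class Projector:
    """Two-layer MLP mapping vision features into the LLM embedding space."""
    def __init__(self, vision_dim, llm_dim, hidden_dim=1024, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.02, (vision_dim, hidden_dim))
        self.w2 = rng.normal(0, 0.02, (hidden_dim, llm_dim))

    def __call__(self, feats):
        # (batch, num_patches, vision_dim) -> (batch, num_patches, llm_dim)
        return gelu(feats @ self.w1) @ self.w2

# e.g. CLIP ViT-B/16 patch features (768-d) projected to a 4096-d LLM space
proj = Projector(vision_dim=768, llm_dim=4096)
tokens = proj(np.random.default_rng(1).normal(size=(1, 197, 768)))
print(tokens.shape)  # (1, 197, 4096)
```

The projected tokens can then be prepended to the text-prompt embeddings, which is how the LLM ends up conditioned on visual context.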

🛠️ Fine-Tuning (Coming Soon)

Train your own projector using paired image-text datasets:

```bash
python train_adapter.py \
  --model llm=Qwen1.5 vision=xclip \
  --dataset_path ./data/relevance_dataset \
  --output_dir ./checkpoints
```
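
Until the training script ships, the core idea can be illustrated with a toy gradient-descent loop that fits a linear projector to synthetic paired features — purely conceptual, and not the package's `train_adapter.py`:

```python
import numpy as np

# Toy illustration of projector training: fit W so that projected
# "vision features" X @ W match paired "text embeddings" Y.
rng = np.random.default_rng(0)
vision_dim, llm_dim, n = 32, 64, 256

W_true = rng.normal(size=(vision_dim, llm_dim))  # pretend ground-truth mapping
X = rng.normal(size=(n, vision_dim))             # synthetic vision features
Y = X @ W_true                                   # synthetic paired embeddings

W = np.zeros((vision_dim, llm_dim))
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n  # gradient of 0.5 * mean squared error
    W -= lr * grad

mse = np.mean((X @ W - Y) ** 2)
print(mse < 1e-3)  # True
```

A real run would replace the synthetic pairs with image features from the vision encoder and target embeddings from captions, but the optimization loop has the same shape.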

📁 Project Structure

```
modu_muse/
├── pipeline.py          # Main multimodal pipeline
├── projector.py         # Vision-to-LLM projector
├── models/
│   ├── llm.py           # LLM loader
│   ├── vision.py        # Vision encoder loader
├── examples/
│   └── quick_start.py   # Demo script
```

🤝 Contributing

We welcome contributions! Whether it's new model support, training scripts, or documentation improvements, open a PR or start a discussion.


📜 License

This project is licensed under the MIT License.
© 2025 Wissem Elkarous


ModuMuse: Where vision meets language.


Project details


Download files

Download the file for your platform.

Source Distribution

  • modu_muse-0.1.4.tar.gz (4.0 kB)

Built Distribution

  • modu_muse-0.1.4-py3-none-any.whl (5.2 kB)

File details

Details for the file modu_muse-0.1.4.tar.gz.

File metadata

  • Download URL: modu_muse-0.1.4.tar.gz
  • Upload date:
  • Size: 4.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for modu_muse-0.1.4.tar.gz

  • SHA256: 1216d098baa532ee2e502cfa564ab108a8eb1b1490904d5f42d3764dd90407f7
  • MD5: 1396f9f50c91fec683bf6ee4c0a4098f
  • BLAKE2b-256: 191d457c27d79a33da42ee975e972afb5a83859c80cfc2d40119015dad5f539c

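The published SHA256 digest above can be checked locally before installing. A minimal sketch — the helper function is ours for illustration, not part of the package:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Published digest for modu_muse-0.1.4.tar.gz (from the list above):
EXPECTED = "1216d098baa532ee2e502cfa564ab108a8eb1b1490904d5f42d3764dd90407f7"
# After downloading, compare:
# assert sha256_of_file("modu_muse-0.1.4.tar.gz") == EXPECTED
```

Alternatively, pip's hash-checking mode (`--require-hashes` in a requirements file) performs the same verification automatically during install.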

File details

Details for the file modu_muse-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: modu_muse-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 5.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for modu_muse-0.1.4-py3-none-any.whl

  • SHA256: 8d7ef7ad15a44107a58b2f79875ea644249dbffd3b736f29f794400e2123cb52
  • MD5: 33118a8de5ac3573a4a28fbd56195f8a
  • BLAKE2b-256: 9f4b9e79256124be7e5feb5b49be6b29cf1a48c1765fcd6f0382b7555570ae0e

