
Modular multimodal pipeline for vision-to-LLM integration


🧠 ModuMuse

Modular Multimodal Intelligence
Plug any Hugging Face LLM and vision encoder together via a learnable projector.
Ready for zero-shot inference now, with adapter-based fine-tuning on the horizon.



🚀 Features

  • 🔌 Plug-and-play architecture for combining LLMs and vision encoders
  • 🧠 Supports popular models like Qwen, Mistral, LLaMA, CLIP, XCLIP, SAM
  • 🧪 Zero-shot inference with learnable projector modules
  • 🛠️ Adapter-based fine-tuning (coming soon)
  • 📊 Easy benchmarking and visualization tools

📦 Installation

pip install modu-muse

🧬 Quick Start

from modu_muse import Pipeline

pipe = Pipeline(
    llm_name="mistralai/Mistral-7B-Instruct-v0.2",
    vision_name="openai/clip-vit-base-patch16"
)

result = pipe.infer("path/to/image.jpg", "Describe the scene.")
print(result)

🧠 Architecture

[Image/Video] → [Vision Encoder] → [Projector] → [LLM]
  • Vision encoder extracts features
  • Projector maps visual features to LLM-compatible embeddings
  • LLM generates text conditioned on visual context
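
Conceptually, the projector is a small learned mapping from the vision encoder's feature dimension to the LLM's embedding dimension, so that patch features can be fed to the LLM as if they were token embeddings. The sketch below illustrates that idea with a plain linear map in NumPy; the `Projector` class, dimensions, and initialization are illustrative assumptions, not ModuMuse's actual implementation:

```python
import numpy as np

class Projector:
    """Toy linear projector: vision features -> LLM embedding space."""

    def __init__(self, vision_dim: int, llm_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Learnable parameters (randomly initialized here; trained in practice).
        self.W = rng.normal(scale=0.02, size=(vision_dim, llm_dim))
        self.b = np.zeros(llm_dim)

    def __call__(self, features: np.ndarray) -> np.ndarray:
        # features: (num_patches, vision_dim) -> (num_patches, llm_dim)
        return features @ self.W + self.b

# Example: 196 patch features (512-d) projected into a 4096-d LLM space.
proj = Projector(vision_dim=512, llm_dim=4096)
patch_features = np.random.default_rng(1).normal(size=(196, 512))
visual_tokens = proj(patch_features)
print(visual_tokens.shape)  # (196, 4096)
```

The LLM then attends over these projected "visual tokens" alongside the text prompt's token embeddings.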

🛠️ Fine-Tuning (Coming Soon)

Train your own projector using paired image-text datasets:

python train_adapter.py \
  --model llm=Qwen1.5 vision=xclip \
  --dataset_path ./data/relevance_dataset \
  --output_dir ./checkpoints
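
Until `train_adapter.py` lands, the general recipe is: freeze the vision encoder and the LLM, and optimize only the projector on paired image–text data. The toy NumPy loop below shows that pattern with a synthetic dataset and a mean-squared-error alignment objective standing in for the LLM's language-modeling loss; every name and number here is illustrative, not ModuMuse's API:

```python
import numpy as np

rng = np.random.default_rng(0)
vision_dim, llm_dim, n = 16, 8, 128

# Synthetic "frozen" vision features and their target text embeddings.
X = rng.normal(size=(n, vision_dim))
W_true = rng.normal(size=(vision_dim, llm_dim))
Y = X @ W_true

# Only the projector weights are learnable.
W = np.zeros((vision_dim, llm_dim))
lr = 1.0

losses = []
for step in range(200):
    pred = X @ W                      # project vision features
    err = pred - Y
    loss = (err ** 2).mean()          # stand-in alignment loss
    losses.append(loss)
    # Gradient of the mean-squared error w.r.t. W.
    grad = 2 * X.T @ err / (n * llm_dim)
    W -= lr * grad

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

Because only the projector is trained, this step is cheap relative to fine-tuning the LLM itself, which is what makes the plug-and-play design practical.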

📁 Project Structure

modu_muse/
├── pipeline.py          # Main multimodal pipeline
├── projector.py         # Vision-to-LLM projector
├── models/
│   ├── llm.py           # LLM loader
│   ├── vision.py        # Vision encoder loader
├── examples/
│   └── quick_start.py   # Demo script

🤝 Contributing

We welcome contributions! Whether it's new model support, training scripts, or documentation improvements—open a PR or start a discussion.


📜 License

This project is licensed under the MIT License.
© 2025 Wissem Elkarous




ModuMuse: Where vision meets language.



Download files

Download the file for your platform.

Source Distribution

modu_muse-0.1.5.tar.gz (4.0 kB)


Built Distribution


modu_muse-0.1.5-py3-none-any.whl (5.2 kB)


File details

Details for the file modu_muse-0.1.5.tar.gz.

File metadata

  • Download URL: modu_muse-0.1.5.tar.gz
  • Size: 4.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for modu_muse-0.1.5.tar.gz:

  • SHA256: 092c5a3863d0b639bbb518da4854d85a196f1e108cbff329d17a0fe4ea27403d
  • MD5: bfa1db0a14374d4b07fc463f6ff374ce
  • BLAKE2b-256: bca06d2cc0987f8b69c186423182bc081110945af4a165ebd017fdd35a449860

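To verify a downloaded distribution against the digests published above, compute the file's SHA256 locally and compare. A small standard-library sketch (the file path is illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on PyPI:
# expected = "092c5a3863d0b639bbb518da4854d85a196f1e108cbff329d17a0fe4ea27403d"
# assert sha256_of("modu_muse-0.1.5.tar.gz") == expected
```

Streaming in chunks keeps memory use constant regardless of file size, which matters more for large wheels than for this 4.0 kB sdist.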

File details

Details for the file modu_muse-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: modu_muse-0.1.5-py3-none-any.whl
  • Size: 5.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for modu_muse-0.1.5-py3-none-any.whl:

  • SHA256: 8b3bbfa97d4681eb4b2acf06ebde3785ae66d608a88ef0e4ae088449527250e6
  • MD5: d33de4d7a1276b6e83656cec04e8b48d
  • BLAKE2b-256: 6f347636f6835321ddf8eaf34d20dc40286862478f895f60d5668e6412ede484

