# globalmm
Give any language model vision, without training.
globalmm builds an mmproj.gguf file that plugs into llama.cpp and lets any
local LLM accept images. The projector inside the mmproj is a single
1152 × d_llm matrix, fit in seconds via closed-form least squares. No gradient
descent, no hours of GPU time, no paired image-caption dataset.
## What you get
```mermaid
flowchart LR
    img[image.jpg] --> siglip[SigLIP SO400M<br/>vision tower]
    siglip --> patches[81 patch vectors<br/>1152-dim each]
    patches --> W[W<br/>1152 x d_llm]
    W --> soft[81 soft tokens<br/>in LLM embedding space]
    soft --> llm[any causal LLM<br/>Qwen, Llama, Mistral, ...]
    llm --> text[text response]
```
Everything to the left of W is frozen SigLIP. Everything to the right of W
is your frozen LLM. W is the only thing globalmm computes, and it is a
single matrix multiplication at runtime.
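The runtime cost of W is easiest to see in code. A minimal numpy sketch, with random data standing in for real SigLIP features and 1536 assumed as Qwen2.5-1.5B's hidden size (adjust d_llm for other models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 81 SigLIP patch vectors (1152-dim) and a fitted projector W.
patches = rng.standard_normal((81, 1152))
W = rng.standard_normal((1152, 1536))

# The entire projector at inference time: one matrix multiplication.
soft_tokens = patches @ W
assert soft_tokens.shape == (81, 1536)  # 81 soft tokens in LLM space
```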
## Quick start
Install with uv:

```sh
uv tool install globalmm
```

Or run it once without installing:

```sh
uvx globalmm build --llm ... --concepts ... --out ...
```
You need two things to build an mmproj:
- A target LLM (any Hugging Face causal LM with a standard embedding table).
- A list of concept words describing the visual domain you care about, one
  word per line in a plain-text file. An example covering everyday COCO-style
  objects lives in `data/concepts.txt`.
Then:
```sh
globalmm build \
  --llm Qwen/Qwen2.5-1.5B-Instruct \
  --concepts data/concepts.txt \
  --out qwen.gguf
```
The first run takes a few minutes because it downloads COCO val2017 (about
800 MB) to `./.globalmm/images/` and runs a one-time SigLIP encoding pass.
Later runs reuse the cache and finish in about thirty seconds.

To use your own images instead of COCO, point `--images` at any folder of
JPEGs or PNGs.
## Running inference with llama.cpp
Once you have `qwen.gguf` (the mmproj) and an existing `qwen2.5-1.5b.gguf`
(the regular LLM weights), llama.cpp handles the rest:
```sh
llama-mtmd-cli \
  -m qwen2.5-1.5b.gguf \
  --mmproj qwen.gguf \
  --image cat.jpg \
  -p "Describe what you see."
```
Or through the OpenAI-compatible server:
```sh
llama-server -m qwen2.5-1.5b.gguf --mmproj qwen.gguf --port 8080
```
No Python at inference time. No transformers. No GPU required.
## How it works
The core idea is that SigLIP and any causal LLM both speak dense vectors, just
in different spaces. SigLIP encodes an image into 81 patch vectors of dimension
1152. An LLM expects token embeddings of its own hidden size. A linear map `W`
with shape `(1152, d_llm)` is enough to bridge the two, provided we can produce
paired samples to fit it against.
```mermaid
flowchart TB
    subgraph build [globalmm build]
        concepts[concepts.txt] --> stext[SigLIP text encoder]
        stext --> csig[concept vectors<br/>in SigLIP space]
        concepts --> tok[LLM tokenizer + embed table]
        tok --> cllm[concept vectors<br/>in LLM space]
        imgs[image folder] --> svis[SigLIP vision tower]
        svis --> feats[per-image features]
        feats --> label[top-3 similarity<br/>against csig]
        label --> blend[linear blend of cllm<br/>= per-image target Y]
        feats --> X[per-image mean-patch X]
        X --> lstsq[W = lstsq X Y]
        blend --> lstsq
        lstsq --> wmat[W matrix]
        wmat --> pack[pack into mmproj.gguf<br/>alongside SigLIP weights]
    end
```
Step by step:
- Encode each concept word twice. Once through SigLIP's text encoder, which lands in the same contrastive space as SigLIP's image features. Once through the target LLM's embedding table, which lands in the LLM's native token space.
- For each image in the cache, take the SigLIP image vector and compute cosine similarity against every concept in SigLIP space. Pick the top three.
- Blend the corresponding LLM embeddings with weights proportional to those
  similarities. This is the image's target `Y`.
- Take the per-image mean of SigLIP's 81 patch vectors as the input `X`.
- Cap the number of images per primary concept at fifty so the COCO
  distribution does not dominate `W`.
- Solve `W = lstsq(X, Y)`. The whole step takes under a second on CPU.
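The steps above can be sketched end to end in numpy, with synthetic data standing in for the real SigLIP text/image features and LLM embedding table (concept and image counts here are illustrative, not the actual defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts, n_images = 40, 500
d_sig, d_llm = 1152, 1536

# Stand-ins for the two concept encodings: SigLIP text space and LLM space.
c_sig = rng.standard_normal((n_concepts, d_sig))
c_llm = rng.standard_normal((n_concepts, d_llm))

# Stand-in per-image mean-patch features in SigLIP space.
X = rng.standard_normal((n_images, d_sig))

# Cosine similarity of each image against every concept; keep the top 3.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
Cn = c_sig / np.linalg.norm(c_sig, axis=1, keepdims=True)
sims = Xn @ Cn.T                              # (n_images, n_concepts)
top3 = np.argsort(-sims, axis=1)[:, :3]

# Blend the matching LLM embeddings, weights proportional to similarity.
w = np.take_along_axis(sims, top3, axis=1)
w = w / w.sum(axis=1, keepdims=True)
Y = np.einsum('ik,ikd->id', w, c_llm[top3])   # per-image targets (n_images, d_llm)

# Closed-form fit: one least-squares solve, no gradient descent.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert W.shape == (d_sig, d_llm)
```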
At inference the mmproj embeds SigLIP's weights plus W in a single .gguf
file. llama.cpp loads it through the gemma3 projector path, runs SigLIP on the
input image, multiplies the 81 patch vectors by W, and splices the result
into the prompt wherever the image token sits.
## Why this works at all
SigLIP is contrastively trained so that semantically similar images and texts
live near each other in a shared space. The top-k concepts for an image are
therefore a fuzzy but meaningful label. Blending the LLM embeddings of those
labels gives a per-image target that sits roughly where the LLM expects to see
the word describing the image. Fitting W against thousands of these pairs
finds one linear map that generalizes across the concept list. Because the
fit uses mean-patch features, the same W applied to individual patches at
inference produces 81 soft tokens that each nudge the LLM in the direction of
the image content.
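The last claim rests on linearity: a `W` fit against mean-patch features transfers to individual patches because projecting the mean equals the mean of the projections. A quick numpy check with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.standard_normal((81, 1152))  # stand-in SigLIP patches
W = rng.standard_normal((1152, 64))        # stand-in projector

# By linearity, W applied to the mean patch equals the mean of the
# per-patch projections, so a mean-patch fit behaves consistently
# when applied patch-by-patch at inference.
assert np.allclose(patches.mean(axis=0) @ W, (patches @ W).mean(axis=0))
```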
This is a far cry from a trained multimodal model. It is closer to a clever lookup table that borrows SigLIP's alignment properties. The upside is that building a projector for a new LLM takes seconds instead of GPU-days.
## Limitations
- Per-LLM. `W` is tied to a specific LLM's embedding table. Swapping LLMs means
  rebuilding the mmproj. The good news is that the rebuild is fast and the CLI
  handles it with one command.
- Concept list matters. globalmm can only describe things that appear in the
  concept list. If you care about medical scans, put medical terms in
  `concepts.txt`. If you care about car parts, put car parts. The default
  example file covers everyday objects only.
- Tokenizer BPE artifacts. Words that split into multiple subword tokens, such
  as `giraffe` (into `gir` and `affe`), are harder to recover. They end up as
  averaged fragments and the LLM may or may not put them back together.
- Gemma3 projector only. The mmproj uses the `clip.projector_type=gemma3`
  metadata key because that is the only linear single-matrix projector
  llama.cpp ships. Any LLM that llama.cpp supports will work, but the target
  LLM's hidden size has to match the `projection_dim` in the mmproj, which is
  why the projector is per-LLM.
- SigLIP is frozen. If SigLIP fails to see something in the image, no projector
  can recover it. This is not a replacement for proper multimodal training if
  you need state-of-the-art quality.
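To illustrate the BPE point: a multi-token concept ends up as an average of its fragment embeddings, as noted above. A toy sketch with a made-up vocabulary and embedding size (not globalmm's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LLM embedding table; the vocabulary and dimension are invented.
vocab = {"cat": 0, "gir": 1, "affe": 2}
embed = rng.standard_normal((len(vocab), 8))

def concept_vector(pieces):
    # Multi-token words become an average of subword embeddings, so
    # "giraffe" -> mean(embed["gir"], embed["affe"]) loses precision.
    return embed[[vocab[p] for p in pieces]].mean(axis=0)

v_cat = concept_vector(["cat"])              # exact: a real row of the table
v_giraffe = concept_vector(["gir", "affe"])  # blurred fragment average
assert v_giraffe.shape == (8,)
```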
## API
```python
from globalmm.projector import compute_W
from globalmm.build_mmproj import build_mmproj

W = compute_W(
    llm_name="Qwen/Qwen2.5-1.5B-Instruct",
    concepts_path="data/concepts.txt",
)
build_mmproj(W, "qwen.gguf")
```
Same result as the CLI, useful for scripting or embedding in a larger pipeline.
## References
The approach borrows from a few papers and projects:
- Zhai et al., Sigmoid Loss for Language Image Pre-Training, ICCV 2023. arxiv.org/abs/2303.15343. SigLIP is the frozen vision backbone. Its contrastive alignment between image and text space is what makes zero-shot concept labeling work.
- Moschella et al., Relative Representations Enable Zero-Shot Latent Space Communication, ICLR 2023. arxiv.org/abs/2209.15430. The broader idea that two frozen embedding spaces can be linked via a fixed set of anchor points without joint training.
- Smith et al., Offline Bilingual Word Vectors, Orthogonal Transformations and the Inverted Softmax, ICLR 2017. arxiv.org/abs/1702.03859. Closed-form linear alignment between embedding spaces, the mathematical ancestor of the single-matrix projector used here.
- Liu et al., Visual Instruction Tuning (LLaVA), NeurIPS 2023. arxiv.org/abs/2304.08485. The trained linear projector baseline that globalmm replaces with closed-form lstsq.
- ggml-org/llama.cpp. The runtime that loads the mmproj and runs SigLIP plus
  W plus the LLM in a single process. The gemma3 projector type in `clip.cpp`
  is the specific format globalmm writes into.
## License
MIT. See LICENSE.