# Vectalab

Professional High-Fidelity Image Vectorization
Convert raster images (PNG, JPG) to optimized SVG with 97%+ quality and 70–80% file size reduction.
## Installation

```bash
pip install vectalab

# Optional: install SVGO (Node.js) for best compression
# Recommended: Node 16+ or a current LTS release
npm install -g svgo
```
## Quick Start

```bash
# Vectorize an image (recommended)
vectalab premium logo.png

# Optimize an existing SVG
vectalab optimize icon.svg

# Check SVGO status
vectalab svgo-info
```
## Results

| Metric | Value |
|---|---|
| Quality (SSIM) | 97–99% |
| File reduction | 70–80% |
| Color accuracy (ΔE) | < 1 (imperceptible) |
| Processing time | 0.2–2 s |
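ΔE in the table refers to perceptual color distance in CIELAB space. The README does not state which ΔE formula vectalab reports; as an illustrative sketch, the classic CIE76 variant is simply Euclidean distance between Lab coordinates, with values below about 1 generally considered imperceptible:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors.
    A value below ~1 is generally considered imperceptible."""
    return math.dist(lab1, lab2)

# Identical colors have zero difference
print(delta_e_cie76((50, 10, -10), (50, 10, -10)))  # 0.0
# A just-noticeable difference sits around delta E = 1
print(delta_e_cie76((50, 0, 0), (50, 1, 0)))  # 1.0
```

More refined variants (CIE94, CIEDE2000) weight lightness and chroma differently, but the "< 1 is imperceptible" rule of thumb is the same.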
## Commands

| Command | Description |
|---|---|
| `premium` | ⭐ SOTA vectorization (recommended) |
| `optimize` | Compress existing SVG with SVGO |
| `convert` | Basic vectorization |
| `logo` | Logo-optimized conversion |
| `info` | Analyze image |
| `svgo-info` | Check SVGO status |
| `benchmark` | 📊 Run performance benchmarks |
## Usage

### CLI

```bash
# Best quality + smallest file
vectalab premium image.png

# Maximum compression
vectalab premium logo.png --precision 1 --mode logo

# Photo vectorization
vectalab premium photo.jpg --mode photo --colors 32

# Compress an existing SVG
vectalab optimize icon.svg
```
### Benchmarking

Run comprehensive benchmarks on your own images to evaluate quality and performance.

```bash
# Run the Python benchmark runner (reproducible & auditable)
python scripts/benchmark_runner.py --input-dir ./my_images --mode premium

# Run targeted 80/20 optimization checks
python scripts/benchmark_80_20.py examples/test_logo.png

# Run the Golden Dataset using the runner
python scripts/benchmark_runner.py --input-dir golden_data --mode premium
```
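The runner scripts above produce per-image metrics, which you can aggregate however you like. A hypothetical helper (the `ssim` and `reduction` field names here are assumptions for illustration, not the runner's actual output schema):

```python
def summarize(results):
    """Average per-image benchmark metrics into one summary dict.
    Hypothetical helper; field names are assumptions."""
    n = len(results)
    return {
        "mean_ssim": sum(r["ssim"] for r in results) / n,
        "mean_reduction": sum(r["reduction"] for r in results) / n,
    }

print(summarize([
    {"ssim": 0.98, "reduction": 0.75},
    {"ssim": 0.97, "reduction": 0.72},
]))
```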
### Python

```python
from vectalab import vectorize_premium

svg_path, metrics = vectorize_premium("input.png", "output.svg")
print(f"Quality: {metrics['ssim']*100:.1f}%")
print(f"Size: {metrics['file_size']/1024:.1f} KB")
print(f"Color accuracy: ΔE={metrics['delta_e']:.2f}")
```
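Because `vectorize_premium` returns a plain metrics dict, results from many files can be collected and reported in any format. A small illustrative formatter, assuming only the three keys shown above:

```python
def format_report(metrics):
    """One-line summary of a metrics dict like the one returned by
    vectorize_premium (assumes the 'ssim', 'file_size', and 'delta_e' keys)."""
    return (f"SSIM {metrics['ssim'] * 100:.1f}% | "
            f"{metrics['file_size'] / 1024:.1f} KB | "
            f"dE {metrics['delta_e']:.2f}")

print(format_report({"ssim": 0.981, "file_size": 20480, "delta_e": 0.42}))
# SSIM 98.1% | 20.0 KB | dE 0.42
```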
## Options

| Flag | Default | Description |
|---|---|---|
| `--precision, -p` | 2 | Coordinate decimals (1 = smallest) |
| `--mode, -m` | auto | `logo`, `photo`, or `auto` |
| `--colors, -c` | auto | Palette size (4–64) |
| `--svgo/--no-svgo` | on | SVGO optimization |
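The `--precision` flag controls how many decimals each path coordinate keeps; fewer decimals mean shorter `d` attributes and a smaller file. A simplified sketch of the idea (not vectalab's actual implementation):

```python
import re

def round_path_coords(d, precision=1):
    """Round numeric tokens in an SVG path 'd' string to a fixed number of
    decimals, trimming trailing zeros -- a simplified sketch of --precision."""
    def repl(match):
        return f"{float(match.group()):.{precision}f}".rstrip("0").rstrip(".")
    return re.sub(r"-?\d+\.\d+", repl, d)

print(round_path_coords("M 10.4567 3.14159 L 2.71828 0.5000", precision=1))
# M 10.5 3.1 L 2.7 0.5
```

Lower precision loses a little geometric fidelity, which is why `--precision 1` pairs well with flat artwork (`--mode logo`) where sub-pixel detail matters least.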
## Cloud Acceleration (Modal)

Vectalab supports offloading heavy segmentation tasks (SAM) to the cloud using Modal.com. This enables using the largest models (`vit_h`) on any machine.

```bash
# Set up Modal credentials
modal setup

# Run SAM segmentation in the cloud
vectalab convert input.png --method sam --use-modal
```

See the Modal Setup Guide for details.
## Documentation
- CLI Reference - Complete command guide
- Python API - Programmatic usage
- Examples - Common workflows
- Algorithm - Technical details
- Benchmarks & Protocol - Reproducible benchmarking and scripts
- Cloud Setup - Modal integration guide
- Model Weights & Download Instructions - where to get large model files and how to place them in the repo
## Scripts cleanup

Some older, ad-hoc testing/analysis scripts were moved into `scripts/archived/` to keep the main `scripts/` directory concise. See `scripts/README.md` for details on which tools live in `scripts/` vs. `scripts/archived/`.
## Architecture

```
PNG/JPG → Analysis → Preprocessing → vtracer → SVGO → SVG
              ↓            ↓            ↓        ↓
         Type detect  Color quant   Tracing  Compress
         (logo/photo) Edge-aware    (Rust)   (30–50%)
```
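The flow above can be sketched as a chain of stages. This is a hypothetical illustration of the control flow only; `detect_type` and its 64-color threshold are invented for the example, not vectalab internals:

```python
def detect_type(image_meta):
    """Classify the input as 'logo' or 'photo' using a toy heuristic:
    few distinct colors suggests flat logo artwork."""
    return "logo" if image_meta["unique_colors"] <= 64 else "photo"

def plan_pipeline(image_meta):
    """Decide per-image settings before tracing; vtracer and SVGO run downstream."""
    mode = detect_type(image_meta)          # Analysis: type detection
    colors = 16 if mode == "logo" else 32   # Preprocessing: palette size
    return {"mode": mode, "colors": colors}

print(plan_pipeline({"unique_colors": 12}))   # {'mode': 'logo', 'colors': 16}
print(plan_pipeline({"unique_colors": 5000}))
```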
## Requirements

- Python 3.10–3.12 (see `pyproject.toml`; the package requires >=3.10)
- Node.js (for SVGO; optional but recommended; use an LTS release)
## Core Dependencies

```
vtracer          # Rust vectorization engine (primary tracing backend)
opencv-python    # Image processing
scikit-image     # Quality & image metrics
cairosvg         # SVG rendering (used in tests and helpers)
```

Optional/advanced features (SAM segmentation, Modal cloud acceleration):

```
segment-anything    # SAM-based segmentation (optional)
modal               # Cloud acceleration (optional; see docs/modal_setup.md)
torch/torchvision   # Hardware-accelerated segmentation models
```
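Since SAM and Modal are optional, code that uses them typically probes for the import and degrades gracefully when they are absent. A common pattern, sketched with a hypothetical `pick_method` helper (not part of vectalab's API):

```python
try:
    import segment_anything  # optional SAM backend  # noqa: F401
    HAS_SAM = True
except ImportError:
    HAS_SAM = False

def pick_method(requested="auto"):
    """Fall back to the default tracer when SAM isn't installed
    (hypothetical helper, not vectalab's API)."""
    if requested == "sam" and not HAS_SAM:
        return "default"
    return requested

print(pick_method("auto"), HAS_SAM)
```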
## License

MIT License - see LICENSE
## Publishing / Releases 🔧

We include a small helper script, `scripts/publish_to_pypi.py`, to build and upload releases to PyPI or TestPyPI.

Quick usage:

```bash
# Install the tools used by the script
python -m pip install --upgrade build twine

# Dry-run to TestPyPI (the default repository is testpypi)
python scripts/publish_to_pypi.py --dry-run

# Upload to TestPyPI (use env TWINE_USERNAME/TWINE_PASSWORD or ~/.pypirc)
python scripts/publish_to_pypi.py --repository testpypi

# Upload to production PyPI
python scripts/publish_to_pypi.py --repository pypi

# Build, upload to PyPI, and tag the current version (reads pyproject.toml)
python scripts/publish_to_pypi.py --repository pypi --tag

# Inspect only the build artifacts and skip upload
python scripts/publish_to_pypi.py --no-upload
```
Notes & recommendations:

- The script expects build artifacts in `dist/` and will run `python -m build` by default.
- Use `--dry-run` to preview the commands to be executed before actually uploading.
- For CI, set `TWINE_USERNAME` and `TWINE_PASSWORD` as environment secrets, or configure `~/.pypirc` so `twine` can use it.
- The script supports both TestPyPI (`--repository testpypi`) and production PyPI (`--repository pypi`).
- You can also target a custom PyPI-compatible endpoint with `--repository-url` (e.g. a private index or an internal upload endpoint); this overrides `--repository`.
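As an illustration of the build-then-upload sequence these notes describe, here is a hypothetical sketch of how such a helper might assemble its commands (`build_commands` is invented for this example; the real internals of `scripts/publish_to_pypi.py` are not shown in this README):

```python
import sys

def build_commands(repository="testpypi", upload=True):
    """Assemble the command lines a publish helper might run.
    Hypothetical sketch, not the actual scripts/publish_to_pypi.py internals."""
    cmds = [[sys.executable, "-m", "build"]]  # builds sdist + wheel into dist/
    if upload:
        cmds.append([sys.executable, "-m", "twine", "upload",
                     "--repository", repository, "dist/*"])
    return cmds

for cmd in build_commands(repository="pypi"):
    print(" ".join(cmd[1:]))  # drop the interpreter path for readability
```

Keeping commands as argument lists (rather than shell strings) makes a `--dry-run` mode trivial: print the lists instead of passing them to `subprocess.run`.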
### CI publishing (recommended)

To safely publish to PyPI on releases, add a GitHub Actions secret named `PYPI_API_TOKEN` containing a PyPI API token (create one at https://pypi.org/manage/account/token/). A workflow is included that runs on pushed tags named like `v*` and publishes the built distributions automatically.

Typical workflow:

- Create a PyPI API token (project- or account-scoped) at https://pypi.org/manage/account/token/.
- Add the token to your repository under Settings → Secrets → Actions → `PYPI_API_TOKEN`.
- Push a git tag (example: `git tag v0.1.0 && git push origin v0.1.0`). The CI workflow will build and publish.
Workflow note: older versions of the `pypa/gh-action-pypi-publish` action required `@release/v1` or a specific `@vX.Y.Z` tag instead of `@release`; the workflow in this repo uses `pypa/gh-action-pypi-publish@release/v1` to avoid the "unable to find version 'release'" error.
### Trusted Publishing (OIDC) support

The workflow also supports GitHub's OpenID Connect (OIDC) / Trusted Publishing flow, in case you prefer not to store a PyPI API token in repository secrets.

The publishing job has job-level permissions so it can request an OIDC id token from GitHub:

```yaml
jobs:
  publish:
    permissions:
      id-token: write
      contents: read
    runs-on: ubuntu-latest
    # ...
```
How to use Trusted Publishing (summary):

- Configure a Trusted Publisher on PyPI and link it to your GitHub repo/org. See PyPI's Trusted Publisher docs (https://pypi.org/help/#trusted-publishers) for setup details.
- Once PyPI trusts your repository/organization, the publishing job requests an OIDC id token and exchanges it with PyPI to authenticate; no token needs to be stored in GitHub secrets.
Notes:

- Trusted Publishing is more secure but requires extra PyPI-side setup and verification. If you prefer a simpler setup, create a project-scoped PyPI API token and set it as the `PYPI_API_TOKEN` secret for CI.
- The workflow can be adapted to support both modes, choosing one depending on whether the secret is present.
## Repository protections

This repository has a conservative branch protection policy applied to `main` to reduce accidental direct pushes and require code review for changes. The policy includes:

- Require at least 1 approving PR review.
- Disallow force-pushes and branch deletions on `main`.
- Do not enforce the rules for admins (admins are exempt in this conservative setup).
- No required CI contexts (these can be added later once GitHub Actions workflows exist).
If you prefer to manage branch protection manually, these are the `gh` commands used (run locally as a repository admin):

```bash
# Conservative policy: require 1 review, strict status checks with no contexts,
# disallow force pushes and deletions
cat > /tmp/prot.json <<'JSON'
{
  "required_status_checks": { "strict": true, "contexts": [] },
  "enforce_admins": false,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "require_code_owner_reviews": false,
    "required_approving_review_count": 1
  },
  "restrictions": null,
  "allow_force_pushes": false,
  "allow_deletions": false
}
JSON

gh api --method PUT /repos/<ORG_OR_USER>/<REPO>/branches/main/protection --input /tmp/prot.json | cat
```
Stricter rules (enforcing the policy for admins, requiring CI contexts, or restricting push access to specific teams) can be applied by adjusting the JSON payload above and re-running the `gh api` command.