FedOps VLM Framework
A plugin-based federated learning framework for Vision-Language Models (VLMs).
This is a standalone, framework-oriented folder (not a single experiment run).
Goal
Provide a reusable Flower-based VLM framework with:
- model plugins (onevision, phiva)
- dataset plugins (vqav2, vqa_rad, and future multimodal sets)
- deployment plugins (mlc_compatible, onevision_research)
- runtime backends (mlc, llama_cpp)
all driven from one shared export request schema.
Current status
- Framework CLI + plugin registry + runtime planning is active.
- Existing project tracks remain in:
  - /home/ccl/Desktop/akeel_folder/MMFL_Flower/fedops-vlm/projects/onevision-research
  - /home/ccl/Desktop/akeel_folder/MMFL_Flower/fedops-vlm/projects/mlc-compatible
Quick start
cd /home/ccl/Desktop/akeel_folder/MMFL_Flower/fedops-vlm/fedops-vlm-framework
source /home/ccl/Desktop/akeel_folder/MMFL_Flower/akeel_research_env/bin/activate
pip install -e .
python -m fedops_vlm_framework.cli --help
List runtime backends:
python -m fedops_vlm_framework.cli --list-runtimes
Generate a full end-to-end setup bundle (recommended first step):
python -m fedops_vlm_framework.cli --setup-e2e --track mlc-compatible
Or for OneVision research:
python -m fedops_vlm_framework.cli --setup-e2e --track onevision-research
This creates a timestamped folder under fedops-vlm/exports/ containing:
- manifest.json
- README_NEXT_STEPS.txt
- scripts/00_verify_env.sh
- scripts/01_train_fl.sh
- scripts/02_export_merged.sh
- scripts/03_generate_runtime_plans.sh
- scripts/04_run_mlc_pipeline.sh
- scripts/05_collect_a24_metrics.sh
Generate an MLC export plan (runtime-agnostic interface -> runtime-specific commands):
python -m fedops_vlm_framework.cli \
--export-plan-runtime mlc \
--base-model nota-ai/phiva-4b-hf \
--adapter-dir /path/to/results/<run>/adapter_10 \
--output-dir /path/to/exports/phiva_a24 \
--device-profile samsung_a24 \
--quantization q4f16_0 \
--context-window-size 768 \
--image-size 224
Generate the same plan from a JSON config (recommended for repeatability):
python -m fedops_vlm_framework.cli \
--plan-config examples/plan_config.phiva.mlc.json \
--save-plan /tmp/plan_mlc.json
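A config such as examples/plan_config.phiva.mlc.json presumably mirrors the CLI flags above. A minimal sketch, assuming keys named after those flags (the framework's real schema may differ):

```python
import json

# Hypothetical contents of a plan config like
# examples/plan_config.phiva.mlc.json. Key names mirror the CLI flags shown
# above (--base-model, --quantization, ...) but are assumptions.
plan_config = {
    "runtime": "mlc",
    "base_model": "nota-ai/phiva-4b-hf",
    "adapter_dir": "/path/to/results/run/adapter_10",
    "output_dir": "/path/to/exports/phiva_a24",
    "device_profile": "samsung_a24",
    "quantization": "q4f16_0",
    "context_window_size": 768,
    "image_size": 224,
}

# Write the config so it can be passed to --plan-config.
with open("/tmp/plan_config.phiva.mlc.json", "w") as f:
    json.dump(plan_config, f, indent=2)
```

Checking a config like this into version control is what makes the plan reproducible across runs.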
Generate a llama.cpp export plan:
python -m fedops_vlm_framework.cli \
--export-plan-runtime llama_cpp \
--base-model nota-ai/phiva-4b-hf \
--adapter-dir /path/to/results/<run>/adapter_10 \
--output-dir /path/to/exports/phiva_a24 \
--runtime-option gguf_quant=Q4_K_M
Or config-driven:
python -m fedops_vlm_framework.cli \
--plan-config examples/plan_config.phiva.llama_cpp.json \
--save-plan /tmp/plan_llama_cpp.json
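Options like --runtime-option gguf_quant=Q4_K_M are key=value pairs. A sketch of how such pairs might be parsed into a dict (illustrative only, not the framework's actual CLI code):

```python
# Parse repeated --runtime-option key=value pairs into a dict.
# This is a generic sketch of the pattern, not the framework's parser.
def parse_runtime_options(pairs):
    opts = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        opts[key.strip()] = value.strip()
    return opts

print(parse_runtime_options(["gguf_quant=Q4_K_M"]))  # {'gguf_quant': 'Q4_K_M'}
```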
Adding new models/datasets/runtimes
No large refactor is needed for a typical extension:
- Add one plugin file under fedops_vlm_framework/plugins/{models|datasets|runtime}.
- Register it in core/registry.py (models/datasets/deploy) or core/runtime_registry.py (runtime).
- Use CLI planning with either flags or --plan-config.
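The plugin-file-plus-registry pattern described above can be sketched as follows; the registry dict, decorator, and class names are hypothetical stand-ins for the actual interface in core/registry.py:

```python
# Sketch of a plugin registry with decorator-based registration.
# Names (MODEL_REGISTRY, register_model, PhivaPlugin) are illustrative.
MODEL_REGISTRY = {}

def register_model(name):
    """Register a model plugin class under a short name."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("phiva")
class PhivaPlugin:
    base_model = "nota-ai/phiva-4b-hf"

    def build(self):
        # A real plugin would load the tokenizer/processor/model here.
        return f"loaded {self.base_model}"

# The CLI can then resolve --model phiva to a plugin class by lookup:
plugin = MODEL_REGISTRY["phiva"]()
```

With this shape, adding a model is one file plus one registry entry, which is what keeps the framework refactor-free under extension.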
Suggested next step
Port one track first (mlc-compatible) into this framework by wiring:
- model plugin: phiva
- dataset plugin: vqav2
- deployment plugin: mlc_compatible
- runtime backend plan: mlc first, then llama_cpp for comparison
File details
Details for the file fedops_vlm_framework-0.1.0-py3-none-any.whl.
File metadata
- Download URL: fedops_vlm_framework-0.1.0-py3-none-any.whl
- Upload date:
- Size: 35.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.20
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a1a3375bb2be614b360ac9a924952641f89d73e65f6614460f0bca7bec0e1af5 |
| MD5 | 2be57470d5e305fba8b832de417b97f9 |
| BLAKE2b-256 | 80c4d0a419606802a4a1b571338b3c9bfe1dfc32ec03c037908926186664f87b |