
A Python library for zero-knowledge proof generation and verification



ZKLoRA: Efficient Zero-Knowledge Proofs for LoRA Verification


Low-Rank Adaptation (LoRA) is a widely adopted method for customizing large-scale language models. In distributed, untrusted training environments, a user of an open-source base model may want to use LoRA weights created by an external contributor, which leads to two requirements:

  1. Base Model User Verification: The user must confirm that the LoRA weights are effective when paired with the intended base model.
  2. LoRA Contributor Protection: The contributor must keep their proprietary LoRA weights private until compensation is assured.

ZKLoRA is a zero-knowledge verification protocol that relies on polynomial commitments, succinct proofs, and multi-party inference to verify LoRA–base model compatibility without exposing LoRA weights.
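To make the object being verified concrete: LoRA augments a frozen weight matrix W with a low-rank product BA, so the adapted forward pass computes y = Wx + (alpha/r)·B(Ax). The sketch below is illustrative only (plain NumPy, not the zklora API):

```python
import numpy as np

# Toy LoRA forward pass: a low-rank delta B @ A is added to the
# frozen base weight W, scaled by alpha / r.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4           # rank r much smaller than d

W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in))           # trainable LoRA factor
B = np.zeros((d_out, r))                     # B starts at zero: no initial delta
x = rng.standard_normal(d_in)

y = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(y, W @ x)                 # zero-initialized B leaves output unchanged

# After training, B is non-zero and the low-rank correction kicks in:
B = rng.standard_normal((d_out, r))
y_adapted = W @ x + (alpha / r) * (B @ (A @ x))
assert not np.allclose(y_adapted, W @ x)
```

ZKLoRA's job is to let the contributor prove that this delta was applied correctly without ever revealing A or B.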

Key Performance Results

Our benchmarks show:

  • Verification time of 1-2 seconds per LoRA module
  • Practical scaling with the number of LoRA modules (e.g., 80+ modules for 70B-parameter models)
  • Efficient handling of varying LoRA sizes (from 24K to 327K parameters per module)

Multi-Party Inference (MPI) Architecture

In our multi-party inference scenario:

  • User A (LoRA contributor) holds LoRA-augmented submodules
  • User B (base model user) has the large base model
  • They collaborate on inference while keeping LoRA computations hidden
  • A generates zero-knowledge proofs of computation correctness
  • B can verify these proofs offline using provided artifacts
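The round-trip above can be sketched without networking or real proofs. The class and attribute names here are hypothetical stand-ins for illustration; the actual flow uses the AServerTCP and BaseModelClient APIs shown in the scripts below:

```python
import numpy as np

rng = np.random.default_rng(1)

class ContributorA:
    """Holds the private LoRA factors; never reveals them."""
    def __init__(self, d, r=2):
        self.A = rng.standard_normal((r, d))
        self.B = rng.standard_normal((d, r))

    def lora_forward(self, h):
        delta = self.B @ (self.A @ h)     # private low-rank computation
        proof_ref = "a-out/proof_0.pf"    # stands in for a real proof artifact
        return delta, proof_ref

class BaseUserB:
    """Holds the large base model; sees only A's output, not A's weights."""
    def __init__(self, d):
        self.W = rng.standard_normal((d, d))

    def forward(self, x, contributor):
        h = self.W @ x                          # base activations, sent to A
        delta, proof_ref = contributor.lora_forward(h)
        return h + delta, proof_ref             # "add_delta"-style combination

a = ContributorA(d=4)
b = BaseUserB(d=4)
y, ref = b.forward(rng.standard_normal(4), a)
```

B later verifies the proof artifacts offline, so A's weights stay private throughout.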

Quick Usage Instructions

1. LoRA Provider Side (User A)

Use lora_contributor_sample_script.py to:

  • Host LoRA submodules
  • Handle inference requests
  • Generate proof artifacts

import argparse
import threading
import time

from zklora import LoRAServer, AServerTCP

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port_a", type=int, default=30000)
    parser.add_argument("--base_model", default="distilgpt2")
    parser.add_argument("--lora_model_id", default="ng0-k1/distilgpt2-finetuned-es")
    parser.add_argument("--out_dir", default="a-out")
    args = parser.parse_args()

    stop_event = threading.Event()
    server_obj = LoRAServer(args.base_model, args.lora_model_id, args.out_dir)
    t = AServerTCP(args.host, args.port_a, server_obj, stop_event)
    t.start()

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("[A-Server] stopping.")
    stop_event.set()
    t.join()

if __name__ == "__main__":
    main()

2. Base Model User Side (User B)

Use base_model_user_sample_script.py to:

  • Load and patch the base model
  • Connect to A's submodules
  • Perform inference
  • Trigger proof generation

import argparse

from zklora import BaseModelClient

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--host_a", default="127.0.0.1")
    parser.add_argument("--port_a", type=int, default=30000)
    parser.add_argument("--base_model", default="distilgpt2")
    parser.add_argument("--combine_mode", choices=["replace","add_delta"], default="add_delta")
    args = parser.parse_args()

    client = BaseModelClient(args.base_model, args.host_a, args.port_a, args.combine_mode)
    client.init_and_patch()

    # Run inference => triggers remote LoRA calls on A
    text = "Hello World, this is a LoRA test."
    loss_val = client.forward_loss(text)
    print(f"[B] final loss => {loss_val:.4f}")

    # End inference => A finalizes proofs offline
    client.end_inference()
    print("[B] done. B can now fetch proof files from A and verify them offline.")

if __name__ == "__main__":
    main()
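The --combine_mode flag above controls how A's LoRA output is merged with B's base activations. Roughly (a hypothetical helper for illustration, not the zklora internals):

```python
import numpy as np

def combine(base_out, lora_out, mode="add_delta"):
    """Toy version of the two combine modes exposed by the script above.

    add_delta: treat A's output as a low-rank correction and add it on.
    replace:   use A's output in place of the base activation.
    """
    if mode == "add_delta":
        return base_out + lora_out
    if mode == "replace":
        return lora_out
    raise ValueError(f"unknown combine_mode: {mode}")

base = np.ones(4)
delta = np.full(4, 0.5)
assert np.allclose(combine(base, delta, "add_delta"), 1.5)
assert np.allclose(combine(base, delta, "replace"), 0.5)
```

add_delta matches the standard LoRA formulation (base output plus scaled BA correction), which is why it is the default.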

3. Proof Verification

Use verify_proofs.py to validate the proof artifacts:

#!/usr/bin/env python3
"""
Verify LoRA proof artifacts in a given directory.

Example usage:
  python verify_proofs.py --proof_dir a-out --verbose
"""

import argparse
from zklora import batch_verify_proofs

def main():
    parser = argparse.ArgumentParser(
        description="Verify LoRA proof artifacts in a given directory."
    )
    parser.add_argument(
        "--proof_dir",
        type=str,
        default="proof_artifacts",
        help="Directory containing proof files (.pf), plus settings, vk, srs."
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print more details during verification."
    )
    args = parser.parse_args()

    total_verify_time, num_proofs = batch_verify_proofs(
        proof_dir=args.proof_dir,
        verbose=args.verbose
    )
    print(f"Done verifying {num_proofs} proofs. Total time: {total_verify_time:.2f}s")

if __name__ == "__main__":
    main()
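The shape of batch verification is simple: walk the proof directory, check each .pf file, and accumulate wall-clock time. The sketch below uses a caller-supplied verify_one stand-in for the real per-proof check (EZKL under the hood in zklora); it is illustrative, not the library implementation:

```python
import glob
import os
import tempfile
import time

def batch_verify_sketch(proof_dir, verify_one, verbose=False):
    """Verify every *.pf file in proof_dir; return (total_seconds, count)."""
    total, count = 0.0, 0
    for path in sorted(glob.glob(os.path.join(proof_dir, "*.pf"))):
        t0 = time.perf_counter()
        ok = verify_one(path)               # stand-in for the real proof check
        total += time.perf_counter() - t0
        count += 1
        if verbose:
            print(f"{path}: {'ok' if ok else 'FAILED'}")
        assert ok, f"proof failed: {path}"
    return total, count

# Demo with placeholder proof files and an always-true checker:
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        open(os.path.join(d, f"proof_{i}.pf"), "w").close()
    elapsed, n = batch_verify_sketch(d, verify_one=lambda path: True)
```

Because each module's proof verifies independently, total verification time grows linearly with the number of LoRA modules, consistent with the 1-2 seconds per module reported above.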

Summary

  • ZKLoRA enables trust-minimized LoRA verification through zero-knowledge proofs
  • Achieves 1-2 second verification per module, even for billion-parameter models
  • Supports multi-party inference with secure activation exchange
  • Maintains complete privacy of LoRA weights while ensuring compatibility
  • Scales efficiently to handle multiple LoRA modules in production environments

Future work includes adding polynomial commitments for base model activations and supporting multi-contributor LoRA scenarios.

Credits

ZKLoRA builds upon several excellent open source libraries:

  • PEFT: Parameter-Efficient Fine-Tuning library by Hugging Face
  • Transformers: State-of-the-art Natural Language Processing by Hugging Face
  • dusk-merkle: Merkle tree implementation in Rust
  • BLAKE3: Cryptographic hash function
  • EZKL: Zero-knowledge proof system for neural networks
  • ONNX Runtime: Cross-platform ML model inference

We are grateful to the maintainers and contributors of these projects for their valuable work.

License

This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License - see the LICENSE file for details. This means you are free to use, share, and adapt the work for non-commercial purposes, as long as you give appropriate credit and distribute your contributions under the same license.
