A resource-efficient fine-tuning toolkit that tunes models via subnet-structured localization, optimization, and integration.


Low-Resources Subnet Integration Adaptation (LoSiA)

This repository contains an early release of the code and deployment instructions for the paper LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization.

💡News | 🫧Method | 🛠️Usage | 📚Citation

💡News

  • August 2025: LoSiA is accepted to the EMNLP 2025 Main Conference!
  • June 2025: First release of the open-source code

🫧Method

LoSiA (Low-Resources Subnet Integration Adaptation) is a novel Parameter-Efficient Fine-Tuning (PEFT) framework that dynamically identifies and optimizes critical sub-networks within LLMs, enabling parameter-efficient full-rank fine-tuning with low latency.


Subnet Localization

The identification of core sub-networks consists mainly of two steps (a minimal sketch follows this list):

  • Computing parameter importance scores via a sensitivity-based metric
  • Running a greedy algorithm over those scores to select optimal input/output neuron subsets
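
As a rough illustration of these two steps, here is a minimal PyTorch sketch. The |w · ∂L/∂w| sensitivity metric and the top-k selection heuristic are simplified stand-ins of our choosing; the paper defines the exact score and greedy procedure.

import torch

def sensitivity_scores(weight: torch.Tensor) -> torch.Tensor:
    # First-order sensitivity |w * dL/dw|: an estimate of the loss change
    # if a parameter were zeroed out (illustrative metric only).
    assert weight.grad is not None, "run loss.backward() first"
    return (weight * weight.grad).abs()

def select_subnet(scores: torch.Tensor, rank_factor: float):
    # Pick the most important output/input neurons of a linear layer.
    # A simple top-k heuristic standing in for the greedy algorithm;
    # `scores` has shape (out_features, in_features).
    d_out, d_in = scores.shape
    k_out = max(1, int(d_out * rank_factor))
    k_in = max(1, int(d_in * rank_factor))
    out_idx = scores.sum(dim=1).topk(k_out).indices  # rows: output neurons
    in_idx = scores.sum(dim=0).topk(k_in).indices    # cols: input neurons
    return out_idx, in_idx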

Subnet Optimization

We design a novel mechanism to organize the optimization of multiple layers. Specifically, LoSiA:

  • Fine-tunes only the identified core subnets instead of full layers
  • Implements asynchronous periodic re-localization to adapt to dynamic training patterns
  • Applies learning rate rewarming during subnet updates for more stable training (a sketch of the schedule follows this list)
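
For intuition, here is a minimal sketch of a rewarmed schedule with periodic restarts. The linear-warmup-plus-cosine shape mirrors the cosine_restarts scheduler used by the provided scripts; the exact schedule LoSiA uses is defined in the code, so treat the shape below as an assumption.

import math

def rewarmed_lr(step: int, period: int, base_lr: float, warmup_frac: float = 0.1) -> float:
    # After each re-localization (every `period` steps) the learning rate
    # is re-warmed from zero, then decays with a cosine shape until the
    # next subnet reselection (cf. cosine_restarts).
    t = step % period
    warmup_steps = max(1, int(period * warmup_frac))
    if t < warmup_steps:
        return base_lr * t / warmup_steps                        # linear re-warmup
    progress = (t - warmup_steps) / max(1, period - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))  # cosine decay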

Efficient Implementation (LoSiA-Pro)

LoSiA-Pro is an equivalent but more efficient implementation of LoSiA that boosts training speed and lowers GPU memory consumption. It includes:

  • Reduction of activation storage by saving only the subnet's activations
  • Replacement of full gradient computation with low-rank matrix multiplication in back-propagation (see the sketch below)
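
One way to picture the saving, for a single linear layer y = xWᵀ: the gradient of the subnet's block of W needs only the matching slices of the input activation and the output gradient, so the full activation never has to be stored. The sketch below is our interpretation with placeholder index sets; LoSiA-Pro's actual implementation lives in the repository.

import torch

batch, d_in, d_out = 4, 1024, 1024
x = torch.randn(batch, d_in)         # layer input saved for backward
grad_y = torch.randn(batch, d_out)   # gradient w.r.t. the layer output

in_idx = torch.arange(0, d_in, 8)    # selected input neurons (placeholder subset)
out_idx = torch.arange(0, d_out, 8)  # selected output neurons (placeholder subset)

# Standard back-propagation materializes the full (d_out x d_in) gradient:
grad_W_full = grad_y.t() @ x

# Subnet-sliced computation: save only x[:, in_idx] in the forward pass and
# contract the sliced tensors, producing the subnet's gradient block directly.
grad_W_sub = grad_y[:, out_idx].t() @ x[:, in_idx]

assert torch.allclose(grad_W_sub, grad_W_full[out_idx][:, in_idx], atol=1e-4)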

🛠️Usage

Environment Setup

Set up the training environment with the following commands:

conda create -n losia python=3.8
conda activate losia
cd LoSiA
pip install -r requirements.txt
pip install flash_attn

If the g++ toolchain is absent from the environment (usually reported by ninja):

conda install -c conda-forge gxx_linux-64
conda install -c conda-forge libxcrypt

The experiment scripts were tested with Python 3.8, PyTorch 2.4.1+cu121, and CUDA 12.4.

Attach LoSiA Optimizer to Backbone

To adopt LoSiA as the optimizer, use attach_losia from optimizer.py. This function creates layer-wise parameter groups and LoSiA optimizers, and registers post-backward hooks for per-layer weight updates. An example of its usage is shown below:

import os
import torch
from transformers import AutoConfig, AutoModelForCausalLM
from optimizer import attach_losia

model_path = "meta-llama/Llama-2-7b-hf"  # or a local checkpoint directory
model_config = AutoConfig.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2"
).cuda()

# Mark all parameters trainable; LoSiA decides per layer which subnet
# actually receives updates.
for p in model.parameters():
    p.requires_grad = True

attach_losia(
    model, model_config,
    num_training_steps = 10000,   # total optimization steps
    lr = 3e-5,                    # peak learning rate
    rank_factor = 1.0/8.0,        # rank factor p: fraction of each layer in the core subnet
    period = 100                  # time slot T between subnet re-localizations
)

Model Training

After attaching LoSiA to the backbone model, the learning-rate schedule and parameter gradients are managed automatically by the LoSiA optimizers in post-backward hooks. Calling backward() on each iteration is all that is needed for training:

for epoch in range(args.epochs):
    for batch_idx, batch in enumerate(dataloader):
        # `batch` is assumed to already contain the `labels` tensor.
        loss = model(**batch).loss
        loss.backward()

For more details, please refer to torchrun_main.py and optimizer.py.

Run with Scripts

In this repository we take LLaMA-2 7B fine-tuning as an example. You can download the backbone model from meta-llama/Llama-2-7b-hf. Training scripts live under the /scripts folder. Run them with the following commands:

cd scripts
bash run_losia.sh

This script will run training on eight common-sense reasoning tasks. For example, train social_i_qa with the following command:

# This takes about 19 GB of GPU memory.
# --rank_factor : rank factor p, controlling the scale of the core subnet
# --period      : time slot T for subnet reselection
torchrun --standalone --nproc_per_node 1 $(dirname "$0")/../torchrun_main.py \
        --model_path meta-llama/Llama-2-7b-hf \
        --dataset_name siqa \
        --dataset_path allenai/social_i_qa \
        --save_dir LLaMA-2-7B-SIQA \
        --lr 5e-5 \
        --batch_size 16 \
        --rank_factor 0.125 \
        --period 50 \
        --max_length 256 \
        --epochs 3 \
        --pad_to_max_len \
        --warmup_steps_ratio 0.1 \
        --grad_clipping 1.0 \
        --dtype bfloat16 \
        --single_gpu \
        --scheduler cosine_restarts \
        --optimizer losia_adamw_per_layer

# Optional: add --activation_checkpointing to enable gradient checkpointing,
# or --use_pro to train with LoSiA-Pro.

LoSiA is developed on top of the training framework of GaLore. For evaluation we use EleutherAI/lm-evaluation-harness; please follow the instructions in that repository for deployment.
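
For example, a hypothetical evaluation call through the harness's Python API (v0.4-style; the checkpoint directory and batch size below are placeholders, and social_iqa is the harness's task name for Social IQa):

import lm_eval

# Evaluate a fine-tuned checkpoint on SIQA with lm-evaluation-harness.
# "LLaMA-2-7B-SIQA" is the save_dir produced by the training script above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=LLaMA-2-7B-SIQA,dtype=bfloat16",
    tasks=["social_iqa"],
    batch_size=16,
)
print(results["results"])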

📚 Citation

If you find this work helpful, we would greatly appreciate a citation.

@misc{wang2025losiaefficienthighrankfinetuning,
      title={LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization}, 
      author={Xujia Wang and Yunjia Qi and Bin Xu},
      year={2025},
      eprint={2507.04487},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.04487}, 
}
