Memory Efficient Variant of Adam
Adam-mini
A PyTorch implementation of Adam-mini, a mini-version of Adam that achieves on-par or better performance than AdamW with 45% to 50% less memory footprint.
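The saving is easy to estimate: AdamW keeps two fp32 optimizer states per parameter (the first moment m and the second moment v), while Adam-mini keeps the per-coordinate m but only about one v per parameter block, so the v storage becomes negligible. A back-of-envelope sketch (the 7B parameter count is an illustrative assumption, not a figure from this page):

```python
# Back-of-envelope optimizer-state memory comparison.
# Assumptions: fp32 states, and the number of blocks is negligible
# relative to the number of parameters.
n_params = 7_000_000_000           # e.g. a 7B-parameter model
bytes_per_state = 4                # fp32

adamw = 2 * n_params * bytes_per_state      # m and v per coordinate
adam_mini = 1 * n_params * bytes_per_state  # m per coordinate, v per block (~0)

saving = 1 - adam_mini / adamw
print(f"AdamW: {adamw / 1e9:.0f} GB, Adam-mini: {adam_mini / 1e9:.0f} GB, "
      f"saving ~= {saving:.0%}")
```

In practice the remaining per-block v states and non-blocked parameters keep the realized saving slightly below this ideal 50%, matching the 45% to 50% range quoted above.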
How to use
from Adam_mini import Adam_mini

optimizer = Adam_mini(
    named_parameters = model.named_parameters(),
    lr = lr,
    betas = (beta1, beta2),
    eps = eps,
    weight_decay = weight_decay,
    dim = model_config.dim,
    n_heads = model_config.n_heads,
    n_kv_heads = model_config.n_kv_heads,
)
For all hyperparameters, including the learning rate (lr), weight_decay, beta1, beta2, and eps, we recommend using the same values as for AdamW.
If you are training Transformers, please pass the following info to Adam-mini:
- dim: dimension of the hidden features. Can be left unspecified if you are training non-Transformer models.
- n_heads: number of attention heads. Can be left unspecified if you are training non-Transformer models.
- n_kv_heads: number of heads for Key and Value, or equivalently the number of query groups in Grouped Query Attention (also known as "n_query_groups"). If None, it defaults to n_heads. Can be left unspecified if you are training non-Transformer models.
Citation
If you find this code helpful, please cite our paper in the following format.
@article{zhang2024adam,
title = {Adam-mini: Use Fewer Learning Rates To Gain More},
author = {Zhang, Yushun and Chen, Congliang and Li, Ziniu and Ding, Tian and Wu, Chenwei and Ye, Yinyu and Luo, Zhi-Quan and Sun, Ruoyu},
journal = {arXiv preprint arXiv:2406.16793},
year = {2024},
}
Hashes for adam_mini-1.0.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | ac29c5c64faf96244b2f4737ce3b642f173edc61f148370e09e090076381450c
MD5 | a6a14e8a073220cec2909e0529f97940
BLAKE2b-256 | 38cb2a3891d0bebce6035e782ba45d6b7daa4882cb5af26386762740bbc3c773