
An efficient implementation of the paper "The Era of 1-bit LLMs"


BitMat: Improving Ternary Matrix Multiplication with Triton

⚠️ We're currently investigating an issue where the .to() method does not work as expected. Hold on tight while we fix it.

0️⃣1️⃣ Introduction

BitMat is a Python package designed to optimize matrix multiplication operations using custom kernels written in Triton. It leverages the principles outlined in the paper "The Era of 1-bit LLMs", specifically using packed int8 data to improve memory and computational efficiency in deep learning and numerical computing tasks.
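For intuition, ternary weights in {-1, 0, 1} need only 2 bits each, so four of them fit in a single int8 byte. The sketch below shows one possible packing scheme (illustrative only; the exact bit layout used by BitMat's kernels may differ):

```python
def pack_ternary(values):
    """Pack a list of ternary values {-1, 0, 1} into bytes, 4 per byte.

    Illustrative scheme: each value is mapped to {0, 1, 2} and stored
    in 2 bits. Not necessarily the layout BitMat's Triton kernels use.
    """
    assert len(values) % 4 == 0
    out = bytearray()
    for i in range(0, len(values), 4):
        b = 0
        for j, v in enumerate(values[i:i + 4]):
            b |= (v + 1) << (2 * j)  # map {-1, 0, 1} -> {0, 1, 2}
        out.append(b)
    return bytes(out)


def unpack_ternary(packed):
    """Recover the original ternary values from the packed bytes."""
    return [((b >> (2 * j)) & 0b11) - 1 for b in packed for j in range(4)]
```

Eight ternary weights therefore occupy 2 bytes instead of the 32 bytes needed for float32 storage, a 16x reduction before any kernel-level savings.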

🎛 Features

Custom Triton Kernels: Utilize highly optimized kernels for matrix multiplication, tailored for performance and efficiency.

Packed int8 Operations: During inference the model uses packed int8 data to reduce memory usage and improve computational efficiency.

Ease of Integration: BitMat is designed to be easily integrated into existing PyTorch/transformers workflows, providing a seamless user experience.
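Because every weight is in {-1, 0, 1}, a ternary matrix multiply needs no multiplications at all, only additions and subtractions. This is the core saving the custom kernels exploit; a plain-Python sketch of the idea (illustrative only, not the Triton implementation):

```python
def ternary_matvec(W, x):
    """Multiply a ternary matrix W (rows over {-1, 0, 1}) by a vector x
    using only additions and subtractions. Illustrative sketch of the
    arithmetic the Triton kernels optimize, not the kernels themselves.
    """
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
            # w == 0 contributes nothing
        out.append(acc)
    return out
```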

💾 Installation

pip install bitmat-tl

At the moment we only support Linux platforms. Windows installation is possible but has not been tested.

🏁 Quick Start

High-level API (transformers-compatible)

from transformers import AutoModelForCausalLM
from bitmat import convert_hf_model

# Initialize your model from an available hf model
model = AutoModelForCausalLM.from_pretrained("some-repo/some-model")
# Convert the model to use BitLinear layers
model = convert_hf_model(model)
# Save the converted model
model.save_pretrained('some_local_folder')

Loading the converted 1.58-bit model

To use the converted 1.58-bit model, such as the customized version of Mistral in this example, import the specific model class from the library. Below is an example demonstrating how to load the Mistral158ForCausalLM model from a local directory:

from bitmat import Mistral158ForCausalLM

# Replace 'path_to_your_model' with the actual path to your model's directory
model = Mistral158ForCausalLM.from_pretrained('path_to_your_model')

Once loaded, the model operates in two distinct modes:

  • Evaluation Mode: By default, the model employs quantized weights, optimizing performance for inference tasks. Activate this mode using model.eval().

  • Training Mode: Switching to this mode via model.train() makes the model use full-precision weights, which is essential for training and fine-tuning, ensuring accurate gradient calculations and effective weight updates.
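The weight-switching behavior behind these two modes can be sketched in plain Python. This is an illustrative pattern only (the class name and the absmean-style quantization here are assumptions, not BitMat's actual internals):

```python
class QuantizedOnEvalLayer:
    """Sketch of the eval/train weight-switching pattern: full-precision
    weights for training, a ternary-quantized copy for inference.
    Hypothetical class, not BitMat's real BitLinear implementation.
    """

    def __init__(self, weights):
        self.weights = list(weights)  # full-precision master copy
        self.training = True
        self._quantized = None
        self.scale = 1.0

    def train(self):
        self.training = True
        self._quantized = None  # drop the stale quantized cache

    def eval(self):
        self.training = False
        # scale by the mean absolute weight, then round to {-1, 0, 1}
        self.scale = sum(abs(w) for w in self.weights) / len(self.weights) or 1.0
        self._quantized = [max(-1, min(1, round(w / self.scale)))
                           for w in self.weights]

    def effective_weights(self):
        """Weights actually used by the forward pass in the current mode."""
        if self.training:
            return self.weights
        return [q * self.scale for q in self._quantized]
```

In eval mode the layer only needs to store the ternary values plus one scale factor per tensor, which is where the memory savings come from.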

This API is fully compatible with the Hugging Face ecosystem.

Low-level API

import torch
from bitmat import BitLinear

layer = BitLinear(in_features=1024, out_features=512, bias=True, eps=1e-5)
# You can use the layer as a normal torch.nn.Linear layer

🫱🏼‍🫲🏽 Contributing

We welcome contributions from the community, whether it's adding new features, improving documentation, or reporting bugs. Please refer to our contribution guidelines before making a pull request.

📜 License

BitMat is open-sourced under the Apache-2.0 license.

Citation

If you use BitMat in your research, please cite it using the following BibTeX entry:

@article{bitmat2024,
  title={BitMat: Improving Matrix Multiplication with Custom Triton Kernels},
  author={AstraMind AI},
  journal={https://github.com/astramind-ai/BitMat},
  year={2024}
}

Support

For questions, issues, or support regarding BitMat, please open an issue on our GitHub repository.

Acknowledgments

Special thanks to the Triton community and the authors of "The Era of 1-bit LLMs" paper for their groundbreaking work and inspiration.

Thanks also to the developers of BitDelta and Unsloth, since part of the code is based on their work.
