# bitsandbytes Intel Backend

Intel-optimized version of bitsandbytes: registers Intel-optimized bitsandbytes operators with PyTorch.
## Quick Start

```shell
# Build and enter the container
docker compose run --build --rm bnb-intel-dev /bin/bash

# Run validation (inside the container)
python -m bitsandbytes_intel
```
## Testing

Expected successful output:
```
root@pvc-hf-1100-00:/workspace# python -m bnb_intel
Initializing bnb_intel module
[W414 18:23:28.291667720 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
2025-04-14 18:23:29,577 - bitsandbytes.cextension - WARNING - The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
Loading ops module
ops module loaded
Registering XPU implementations
Successfully registered XPU implementation
Registering HPU implementations
Successfully registered HPU implementations
🧪 Running minimal XPU backend test...
int8_linear_matmul_xpu called with tensors of shape: torch.Size([32, 64]) torch.Size([128, 64])
✅ Operator executed successfully!
Input shapes: torch.Size([32, 64]) x torch.Size([128, 64])
Output shape: torch.Size([32, 128])
Output device: xpu:0
[W414 18:23:30.825181068 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
```
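The shapes in the validation log follow the usual int8 GEMM convention: the weight is stored as `[out_features, in_features]` and the output is `A · Bᵀ` with int32 accumulation. A minimal NumPy sketch of that shape arithmetic (illustrative only; the real operator runs on the XPU device):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-128, 128, size=(32, 64), dtype=np.int8)   # activations [m, k]
B = rng.integers(-128, 128, size=(128, 64), dtype=np.int8)  # weights [n, k], row-major

# int8 GEMM accumulates in int32: out = A @ B.T
out = A.astype(np.int32) @ B.astype(np.int32).T
print(out.shape)  # (32, 128), matching the log above
```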
## Technical Implementation

Key files:

- `src/bitsandbytes_intel/ops.py` - Intel kernel registration
- `src/bitsandbytes_intel/__init__.py` - Autoload setup
- `docker-compose.yml` - Build environment
- `setup.py` - Package configuration
Uses PyTorch's autoload mechanism to register device-specific implementations:

```python
@torch.library.impl("bitsandbytes::int8_linear_matmul", "XPU")
```
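PyTorch's autoload mechanism discovers backend packages through an entry point in the `torch.backends` group, so importing `torch` triggers the package's registration hook automatically. A sketch of how that wiring could look, expressed as a `pyproject.toml` fragment (the file list above names `setup.py`, and the hook name `autoload` is an assumption, not the package's actual symbol):

```toml
# Illustrative only: exposes an init function to PyTorch's
# device-backend autoload scan via the "torch.backends" group.
[project.entry-points."torch.backends"]
bitsandbytes_intel = "bitsandbytes_intel:autoload"
```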
## Download files

- Source distribution: `bitsandbytes_intel-0.0.1.dev0.tar.gz`
- Built distribution: `bitsandbytes_intel-0.0.1.dev0-py3-none-any.whl`
## File details

### bitsandbytes_intel-0.0.1.dev0.tar.gz

- Size: 12.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `5514e272c34116369dea2f5ed85a497e22f1aa4fec2d4796b8bc6b9cd63faf8f` |
| MD5 | `ae751816ae0205327c60bda5c9914f36` |
| BLAKE2b-256 | `5ce773dbb32c732b4cadaeddc8abe84371c71c39550ef549dfa969a7b980cad4` |
### Provenance

The following attestation bundles were made for `bitsandbytes_intel-0.0.1.dev0.tar.gz`:

Publisher: `package.yml` on `bitsandbytes-foundation/bitsandbytes-intel`

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `bitsandbytes_intel-0.0.1.dev0.tar.gz`
- Subject digest: `5514e272c34116369dea2f5ed85a497e22f1aa4fec2d4796b8bc6b9cd63faf8f`
- Sigstore transparency entry: 196949138
- Permalink: `bitsandbytes-foundation/bitsandbytes-intel@a3dd40931c81ce651c63a9169d648409298b937f`
- Branch / Tag: `refs/heads/main`
- Owner: https://github.com/bitsandbytes-foundation
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `package.yml@a3dd40931c81ce651c63a9169d648409298b937f`
- Trigger Event: workflow_dispatch
### bitsandbytes_intel-0.0.1.dev0-py3-none-any.whl

- Size: 11.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cd41e27c40851fdbda23c3b389b9b9b7c5517925b38fe91fbf3d931353b5a04e` |
| MD5 | `f88d629e153464e7d5402363d8f88ea4` |
| BLAKE2b-256 | `961dff2d949abb566174c50404e444464809d5f5d27ecc05b33c7058ce70f5fe` |
### Provenance

The following attestation bundles were made for `bitsandbytes_intel-0.0.1.dev0-py3-none-any.whl`:

Publisher: `package.yml` on `bitsandbytes-foundation/bitsandbytes-intel`

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `bitsandbytes_intel-0.0.1.dev0-py3-none-any.whl`
- Subject digest: `cd41e27c40851fdbda23c3b389b9b9b7c5517925b38fe91fbf3d931353b5a04e`
- Sigstore transparency entry: 196949141
- Permalink: `bitsandbytes-foundation/bitsandbytes-intel@a3dd40931c81ce651c63a9169d648409298b937f`
- Branch / Tag: `refs/heads/main`
- Owner: https://github.com/bitsandbytes-foundation
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: `package.yml@a3dd40931c81ce651c63a9169d648409298b937f`
- Trigger Event: workflow_dispatch