CAMEL: Context-Aware Modifier for Efficient Language model
Introduction
CAMEL (Context-Aware Modifier for Efficient Language model) is a speculative decoding method inspired by EAGLE. It compresses the hidden states of previous input tokens according to a window size and then uses the compressed states to draft (speculate) future tokens.
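To make the window-based compression concrete, here is a minimal, illustrative sketch in plain Python: it mean-pools each non-overlapping window of hidden-state vectors into a single vector. This only illustrates the idea of compressing by window size; CAMEL's actual modifier is a learned module, and `compress_hidden_states` is a hypothetical name, not part of the package.

```python
def compress_hidden_states(hidden_states, window_size):
    """Mean-pool consecutive windows of hidden-state vectors.

    hidden_states: list of equal-length vectors (one per input position).
    Returns one pooled vector per window (a trailing partial window is
    pooled over however many positions it contains).
    """
    compressed = []
    for start in range(0, len(hidden_states), window_size):
        window = hidden_states[start:start + window_size]
        dim = len(window[0])
        pooled = [sum(vec[i] for vec in window) / len(window) for i in range(dim)]
        compressed.append(pooled)
    return compressed

# Four positions with 2-dim hidden states, window size 2 -> 2 pooled vectors
states = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(compress_hidden_states(states, 2))  # [[2.0, 3.0], [6.0, 7.0]]
```

With window size 1 the sequence is unchanged, which matches the intuition that the `w1` modifiers below compress the least and the `w64` modifiers the most.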
Installation
```bash
pip install modifier
```
Quick Start
CAMEL currently supports only meta-llama/Llama-2-7b-chat-hf.
```python
import torch
from camel import CamelModel

prompt = "What is artificial intelligence?"

model = CamelModel.from_pretrained(
    base_model_path="meta-llama/Llama-2-7b-chat-hf",
    modifier_path="0xWe11es/camel-llama2-h1024-w1",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = model.get_tokenizer()

# Tokenize to a PyTorch tensor (generate expects a batched tensor,
# not the plain Python list that tokenizer(prompt) would return).
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)

# Decode the first (and only) sequence in the batch
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output)
```
CAMEL provides the following modifiers based on Llama 2 (h denotes the modifier hidden size, w the window size):
- 0xWe11es/camel-llama2-h256-w1
- 0xWe11es/camel-llama2-h256-w4
- 0xWe11es/camel-llama2-h256-w16
- 0xWe11es/camel-llama2-h256-w64
- 0xWe11es/camel-llama2-h1024-w1
- 0xWe11es/camel-llama2-h1024-w4
- 0xWe11es/camel-llama2-h1024-w16
- 0xWe11es/camel-llama2-h1024-w64
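The published repo ids above follow a regular naming scheme, so picking a checkpoint can be reduced to a tiny helper. `modifier_repo` below is a hypothetical convenience function (not part of the `modifier` package); only the resulting repo ids come from the list above.

```python
def modifier_repo(hidden_size, window_size):
    """Build the Hugging Face repo id for a published CAMEL modifier."""
    # Only these combinations are listed as available for Llama 2
    assert hidden_size in (256, 1024), "unsupported hidden size"
    assert window_size in (1, 4, 16, 64), "unsupported window size"
    return f"0xWe11es/camel-llama2-h{hidden_size}-w{window_size}"

print(modifier_repo(1024, 4))  # 0xWe11es/camel-llama2-h1024-w4
```

The returned string can be passed directly as `modifier_path` in the quick-start example.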
Performance
We evaluated the modifier 0xWe11es/camel-llama2-h1024-w4 on several datasets and obtained the following results relative to the vanilla model (Hugging Face version).
| Dataset | Model | Temperature | Speed (tokens/s) | Speedup |
|---|---|---|---|---|
| MT-Bench | Llama 2 7B | 0.0 | 71.85 | 1.92x |
| MT-Bench | Llama 2 7B | 1.0 | 57.54 | 1.62x |
| GSM8K | Llama 2 7B | 0.0 | 73.51 | 2.20x |
| GSM8K | Llama 2 7B | 1.0 | 57.15 | 1.77x |
| Alpaca | Llama 2 7B | 0.0 | 68.92 | 1.88x |
| Alpaca | Llama 2 7B | 1.0 | 55.38 | 1.56x |
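Since each row reports the CAMEL decoding speed together with the speedup over vanilla decoding, the implied vanilla baseline is simply speed divided by speedup. The snippet below recomputes those baselines from the table's numbers:

```python
# Each tuple: (dataset, temperature, CAMEL tokens/s, speedup over vanilla)
rows = [
    ("MT-Bench", 0.0, 71.85, 1.92),
    ("MT-Bench", 1.0, 57.54, 1.62),
    ("GSM8K",    0.0, 73.51, 2.20),
    ("GSM8K",    1.0, 57.15, 1.77),
    ("Alpaca",   0.0, 68.92, 1.88),
    ("Alpaca",   1.0, 55.38, 1.56),
]
for dataset, temp, speed, speedup in rows:
    # Implied vanilla decoding speed = CAMEL speed / speedup
    print(f"{dataset} (T={temp}): baseline ~ {speed / speedup:.1f} tokens/s")
```

The implied baselines (roughly 32-37 tokens/s) are consistent across datasets, which is what one would expect since the vanilla model's decoding speed does not depend on the evaluation set's content to first order.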
Download files

Source Distribution: modifier-0.0.3.tar.gz

Built Distribution: modifier-0.0.3-py3-none-any.whl
File details
Details for the file modifier-0.0.3.tar.gz.
File metadata
- Download URL: modifier-0.0.3.tar.gz
- Upload date:
- Size: 37.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.3 Linux/6.5.0-35-generic
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 05dbbb812ab298f373070a9829dd728529a3978e12f9cf7efe623693e096e425 |
| MD5 | 5df81763dc3b575e878c1ebc5830ee9b |
| BLAKE2b-256 | 2889a5938a2b42b76eb3375f0277a58469cab6c3a975973fe98cbc27c5e023cc |
File details
Details for the file modifier-0.0.3-py3-none-any.whl.
File metadata
- Download URL: modifier-0.0.3-py3-none-any.whl
- Upload date:
- Size: 40.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.3 Linux/6.5.0-35-generic
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 39deddf2c6f8e628b910f01c94336011f42dbd7534ac51549f4d9804c77c2ceb |
| MD5 | 55477e62ebb54e74a85d75581d9be33d |
| BLAKE2b-256 | 94dc62859e470fb154a880b087afab7a7cea6fb91fed63999aa702d2e275ef09 |