To speed up LLM inference and improve the model's perception of key information, this package compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Project description
The author of this package has not provided a project description
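Going only by the summary line above and the package name, this appears to expose LLMLingua-style prompt compression as a Prompt flow tool. As a rough orientation, here is a minimal sketch of the upstream llmlingua API that such a tool would wrap; the model choice, parameter values, and return keys below follow the upstream llmlingua project and are assumptions, not documentation of this package.

```python
# Illustrative sketch only: prompt compression via the upstream `llmlingua`
# package (assumed dependency of this tool). Parameter values are examples.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # loads the default compression model (large download on first use)

context = ["A long document or retrieved passage goes here ..."]
result = compressor.compress_prompt(
    context,
    instruction="Answer the question based on the context.",
    question="What is the key finding?",
    target_token=200,  # ask for roughly 200 tokens in the compressed prompt
)
print(result["compressed_prompt"])
```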
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
llmlingua_promptflow-0.0.1.tar.gz (17.8 kB)
Built Distribution
llmlingua_promptflow-0.0.1-py3-none-any.whl (18.8 kB)
File details
Details for the file llmlingua_promptflow-0.0.1.tar.gz.
File metadata
- Download URL: llmlingua_promptflow-0.0.1.tar.gz
- Upload date:
- Size: 17.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.8.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 472b2257e2e67924228232066bd8523d92f73130865c2f062944418669686100
MD5 | 401bf6183f45a7c5533839b73292eaa1
BLAKE2b-256 | 91272c18dc293f2f28cf18755de49ee732e80ab87daac4f9ed2fbc8bb734e2bd
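Any of the digests above can be re-computed locally after downloading the sdist; a minimal sketch using only the Python standard library (the local file path is an assumption):

```python
# Check a downloaded copy of the sdist against the SHA256 digest listed above.
# Assumes the file was saved to the current working directory.
import hashlib

EXPECTED_SHA256 = "472b2257e2e67924228232066bd8523d92f73130865c2f062944418669686100"

with open("llmlingua_promptflow-0.0.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"SHA256 mismatch: {digest}"
print("SHA256 verified")
```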
File details
Details for the file llmlingua_promptflow-0.0.1-py3-none-any.whl.
File metadata
- Download URL: llmlingua_promptflow-0.0.1-py3-none-any.whl
- Upload date:
- Size: 18.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.8.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8a2e15c61482418a9d772e7c33f41d710ca574426aaf33946f3e43cae402aa79
MD5 | 5a580f8cd4dbd0bd5abfb0a9af12ad33
BLAKE2b-256 | 7dd2011daff14477204efbde48f04dd0bb0aa38b5ade9a7ccac5d5a7173aedea