A simple and efficient Python library for fast inference of GGUF Large Language Models.
ALLM
ALLM is a Python library designed for fast inference of Large Language Models (LLMs) in the GGUF format on both CPU and GPU. It provides a convenient interface for loading pre-trained GGUF models and running inference with them. The library is well suited to applications where quick response times matter, such as chatbots and text generation.
Features
- Efficient Inference: ALLM leverages the power of GGUF models to provide fast and accurate inference.
- CPU and GPU Support: The library is optimized for both CPU and GPU, allowing you to choose the best hardware for your application.
- Simple Interface: With straightforward command-line support, you can load a model and run inference with a single command.
- Flexible Configuration: Customize inference settings such as temperature and model path to suit your needs.
Installation
You can install ALLM using pip:
pip install allm
Usage
You can start inference with the simple 'allm-run' command. It takes the model name or path as a required argument, plus optional temperature, max new tokens, and additional model kwargs.
allm-run --name model_name_or_path
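For example, to pass a custom temperature and token limit, a call might look like the following; the optional flag names are an assumption based on the argument list above, so run allm-run --help to confirm the exact names in your installed version:
allm-run --name model_name_or_path --temperature 0.7 --max_new_tokens 256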
API
After initialising or downloading a model, you can start the inference API with the simple 'allm-serve' command. The command starts the API server on the default host 127.0.0.1:5000. If you want to run the server on a different host or port, edit the apiconfig.txt file in your model directory.
allm-serve
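As a rough sketch, an apiconfig.txt overriding the defaults might contain entries like the following; the key names here are an assumption, so mirror whatever keys the file generated in your model directory already uses:
host=0.0.0.0
port=8080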
Supported Model names
Llama2, llama, llama2_chat, Llama_chat, Mistral, Mistral_instruct
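For example, to run one of the named models and then expose it over the API, you could use:
allm-run --name Mistral_instruct
allm-serve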