# python-llama-cpp-http

Python HTTP Server and LangChain LLM Client for llama.cpp.
The server exposes only two routes:

- **call**: for a prompt, get the whole text completion at once:
  `POST /api/1.0/text/completion`
- **stream**: for a prompt, get text chunks over a WebSocket:
  `GET /api/1.0/text/completion`

The LangChain LLM Client supports synchronous calls only, built on the Python packages `requests` and `websockets`.
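As a rough sketch, a raw call to the **call** route could look like the following. This uses only the standard library (the project's own client uses `requests`), and the host, port, and payload field names are assumptions, not the documented schema; see `misc/example_client_call.py` for the real request format.

```python
import json
import urllib.request

API_URL = "http://localhost:5000/api/1.0/text/completion"  # host/port are assumptions

def build_payload(prompt: str, n_predict: int = 64) -> dict:
    # Field names here are illustrative assumptions; check
    # misc/example_client_call.py for the exact schema the server expects.
    return {"prompt": prompt, "n_predict": n_predict}

def call_completion(prompt: str) -> dict:
    # Plain-stdlib POST of a JSON body; the project's client uses requests instead.
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(build_payload("Hello, llama!"))
    # print(call_completion("Hello, llama!"))  # needs a running server
```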
## Install

```shell
pip install llama_cpp_http
```
## Manual install

These steps assume the GPU driver and the OpenCL / CUDA libraries are already installed.

Make sure you follow the instructions in LLAMA_CPP.md below for one of the following backends:

- CPU, including Apple (recommended for beginners)
- OpenCL with CLBlast for AMD/NVIDIA GPUs
- HIP/ROCm with hipBLAS for AMD GPUs
- CUDA with cuBLAS for NVIDIA GPUs

The CPU-only build of llama.cpp is the easiest starting point if you do not want to deal with GPU drivers and libraries.
### Install build packages

- Arch/Manjaro: `sudo pacman -Sy base-devel python git jq`
- Debian/Ubuntu: `sudo apt install build-essential python3-dev python3-venv python3-pip libffi-dev libssl-dev git jq`
### Clone repo

```shell
git clone https://github.com/mtasic85/python-llama-cpp-http.git
cd python-llama-cpp-http
```

Make sure you are inside the cloned repo directory `python-llama-cpp-http`.
### Set up Python venv

```shell
python -m venv venv
source venv/bin/activate
python -m ensurepip --upgrade
pip install -U .
```
### Clone and compile llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp llama.cpp
cd llama.cpp
make -j
```
### Download Meta's Llama 2 7B model

Download a GGUF model from https://huggingface.co/TheBloke/Llama-2-7B-GGUF into the local directory `models`.

We recommend https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q2_K.gguf because it has the lowest resource requirements, so it can fit in either RAM or VRAM.
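If you prefer scripting the download, here is a minimal Python sketch. It assumes Hugging Face's standard `/resolve/<revision>/<filename>` raw-file URL pattern; the actual fetch is commented out because the file is several GB.

```python
import urllib.request
from pathlib import Path

REPO = "TheBloke/Llama-2-7B-GGUF"
FILENAME = "llama-2-7b.Q2_K.gguf"

def model_url(repo: str, filename: str) -> str:
    # Hugging Face serves raw repo files under /resolve/<revision>/<filename>.
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"

def download_model(dest_dir: str = "models") -> Path:
    # Create ./models if needed and download the model into it, skipping
    # the fetch when the file is already present.
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / FILENAME
    if not target.exists():
        urllib.request.urlretrieve(model_url(REPO, FILENAME), target)
    return target

if __name__ == "__main__":
    print(model_url(REPO, FILENAME))
    # download_model()  # fetches several GB, so it is not run here
```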
## Run server

```shell
python -m llama_cpp_http.server --backend cpu --models-path ./models --llama-cpp-path ./llama.cpp
```
## Run client examples

- Simple text completion call to `POST /api/1.0/text/completion`:

  ```shell
  python -B misc/example_client_call.py | jq .
  ```

- WebSocket stream from `GET /api/1.0/text/completion`:

  ```shell
  python -B misc/example_client_stream.py | jq -R '. as $line | try (fromjson) catch $line'
  ```
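The `jq` filter in the stream example just tries to parse each streamed chunk as JSON and falls back to the raw line. The same logic in Python, plus a sketch of a raw WebSocket client, is shown below; the host, port, and payload field name are assumptions, and the client requires the third-party `websockets` package (see `misc/example_client_stream.py` for the real client):

```python
import asyncio
import json

WS_URL = "ws://localhost:5000/api/1.0/text/completion"  # host/port are assumptions

def parse_chunk(line: str):
    # Mirror the jq filter: parse JSON if possible, otherwise keep the raw line.
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return line

async def stream(prompt: str) -> None:
    # Requires the third-party "websockets" package; the payload field name
    # is an assumption -- see misc/example_client_stream.py for the real client.
    import websockets
    async with websockets.connect(WS_URL) as ws:
        await ws.send(json.dumps({"prompt": prompt}))
        async for message in ws:
            print(parse_chunk(message))

if __name__ == "__main__":
    print(parse_chunk('{"chunk": "Hello"}'))
    # asyncio.run(stream("Hello, llama!"))  # needs a running server
```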
## Licensing

python-llama-cpp-http is licensed under the MIT license. Check the LICENSE file for details.
## File details

### llama_cpp_http-0.1.0.tar.gz

File metadata:

- Download URL: llama_cpp_http-0.1.0.tar.gz
- Upload date:
- Size: 10.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Linux/6.4.12-arch1-1

File hashes:

Algorithm | Hash digest
---|---
SHA256 | 350974893fad94ac18727a0c9e163f9f83d7e509e11aaf5d27444f3b60036061
MD5 | 4a868276c9b1998b5f5982b1307e2675
BLAKE2b-256 | db217cc6a2ea01098caa908fba4fda8261c2ed60dea3849bb7bd24cb0282f06b
### llama_cpp_http-0.1.0-py3-none-any.whl

File metadata:

- Download URL: llama_cpp_http-0.1.0-py3-none-any.whl
- Upload date:
- Size: 10.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.11.5 Linux/6.4.12-arch1-1

File hashes:

Algorithm | Hash digest
---|---
SHA256 | de38fbde6e750cc89a9c00978dab590bd7772dbd447d60600c8872d5c723cc04
MD5 | 66261481933cbfc1f2e9074f133b22f7
BLAKE2b-256 | 6ea776adab3464bd8546a5a59d636852bfeab47b21a0233b2085dcf79cd4f8ef