🎭🦙 llama-api-server

An OpenAI-API-compatible REST server for LLaMA.

This project is under active development. Breaking changes may be made at any time.

Llama as a Service! This project builds a RESTful API server compatible with the OpenAI API on top of open-source backends such as llama.cpp and pyllama.
🚀Get started
Prepare model
llama.cpp
If you don't have a quantized llama.cpp model, follow the instructions to prepare one.
pyllama
If you don't have a quantized pyllama model, follow the instructions to prepare one.
Install
Use the following script to install the package from PyPI and generate the model config file config.yml and the security token file tokens.txt.
pip install llama-api-server
# to run with pyllama
pip install llama-api-server[pyllama]
cat > config.yml << EOF
models:
  completions:
    text-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
    text-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
    text-davinci-003:
      type: pyllama
      params:
        ckpt_dir: /absolute/path/to/your/7B/
        tokenizer_path: /absolute/path/to/your/tokenizer.model
  embeddings:
    text-embedding-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
      # keep to 1 instance to speed up loading of the model
      min_instance: 1
      max_instance: 1
      idle_timeout: 3600
    text-embedding-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
EOF
echo "SOME_TOKEN" > tokens.txt
# start web server
python -m llama_api_server
Call with openai-python
export OPENAI_API_KEY=SOME_TOKEN
export OPENAI_API_BASE=http://127.0.0.1:5000/v1
openai api completions.create -e text-ada-002 -p "hello?"
curl -X POST http://127.0.0.1:5000/v1/embeddings -H 'Content-Type: application/json' -d '{"model":"text-embedding-ada-002", "input":"It is good."}' -H "Authorization: Bearer SOME_TOKEN"
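The same embeddings call can also be composed from Python with only the standard library. This is a minimal sketch assuming the default host, port, and token shown above; it builds the request but only sends it once the server is running.

```python
import json
import urllib.request

BASE = "http://127.0.0.1:5000/v1"
TOKEN = "SOME_TOKEN"

# Build the same POST /v1/embeddings request the curl command sends.
payload = json.dumps({"model": "text-embedding-ada-002", "input": "It is good."})
req = urllib.request.Request(
    f"{BASE}/embeddings",
    data=payload.encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)
# With the server running: response = urllib.request.urlopen(req)
print(req.full_url, req.get_method())
```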
🛣️Roadmap
Tested with
- openai-python
  - OPENAI_API_TYPE=default
  - OPENAI_API_TYPE=azure
- llama-index

Supported APIs
- Completions
  - set temperature, top_p, and top_k
  - set max_tokens
  - set echo
  - set stop
  - set stream
  - set n
  - set presence_penalty and frequency_penalty
  - set logit_bias
- Embeddings
  - batch process
- Chat
- List model
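As a reference, here is a sketch (not taken from the project docs) of a Completions request body exercising the parameters listed above; the token id used in logit_bias is a made-up example.

```python
import json

# Hypothetical Completions request body using the parameters above.
payload = {
    "model": "text-ada-002",       # model name from config.yml
    "prompt": "hello?",
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,                   # top_k is a llama-specific extension
    "max_tokens": 16,
    "echo": False,
    "stop": ["\n"],
    "logit_bias": {"1234": -100},  # hypothetical token id
}
body = json.dumps(payload)
print(body)
```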
Supported backends
- llama.cpp via llamacpp-python
- llama via pyllama
  - without quantization
  - with quantization

Others
- Performance parameters like n_batch and n_thread
- Token auth
- Documentation
- Integration tests
- A tool to download/prepare pretrained models
- Make config.ini and the token file configurable