An OpenAI API-compatible REST server for llama.
🎭🦙 llama-api-server
This project is under active development. Breaking changes may be made at any time.
Llama as a Service! This project tries to build a RESTful API server compatible with the OpenAI API, using open-source backends like llama/llama2.
With this project, many common GPT tools/frameworks can work with your own model.
🚀 Get started
Prepare model
llama.cpp
If you don't have a quantized llama.cpp model, you need to follow the instructions to prepare one.
pyllama
If you don't have a quantized pyllama model, you need to follow the instructions to prepare one.
Install
Use the following script to download the package from PyPI and generate the model config file config.yml and the security token file tokens.txt.
pip install llama-api-server
# to run with pyllama
pip install "llama-api-server[pyllama]"
cat > config.yml << EOF
models:
  completions:
    # completions and chat_completions use the same model
    text-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
    text-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
    text-davinci-003:
      type: pyllama
      params:
        ckpt_dir: /absolute/path/to/your/7B/
        tokenizer_path: /absolute/path/to/your/tokenizer.model
  # keep to 1 instance to speed up loading of the model
  embeddings:
    text-embedding-davinci-002:
      type: pyllama_quant
      params:
        path: /absolute/path/to/your/pyllama-7B4b.pt
      min_instance: 1
      max_instance: 1
      idle_timeout: 3600
    text-embedding-ada-002:
      type: llama_cpp
      params:
        path: /absolute/path/to/your/7B/ggml-model-q4_0.bin
EOF
echo "SOME_TOKEN" > tokens.txt
# start web server
python -m llama_api_server
# or visible across the network
python -m llama_api_server --host=0.0.0.0
Call with openai-python
export OPENAI_API_KEY=SOME_TOKEN
export OPENAI_API_BASE=http://127.0.0.1:5000/v1
openai api completions.create -e text-ada-002 -p "hello?"
curl -X POST http://127.0.0.1:5000/v1/embeddings -H 'Content-Type: application/json' -d '{"model":"text-embedding-ada-002", "input":"It is good."}' -H "Authorization: Bearer SOME_TOKEN"
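The same embeddings call can also be made from Python with only the standard library. A minimal sketch, assuming the server is listening on 127.0.0.1:5000 and tokens.txt contains SOME_TOKEN; build_embeddings_request is a hypothetical helper written for this example, not part of this project:

```python
import json
import urllib.request

API_BASE = "http://127.0.0.1:5000/v1"  # where llama-api-server listens
TOKEN = "SOME_TOKEN"                   # one line from tokens.txt

def build_embeddings_request(model: str, text: str) -> urllib.request.Request:
    """Build the same POST request the curl example above sends."""
    body = json.dumps({"model": model, "input": text}).encode()
    return urllib.request.Request(
        f"{API_BASE}/embeddings",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

req = build_embeddings_request("text-embedding-ada-002", "It is good.")
# urllib.request.urlopen(req) would send it once the server is running
```

Only the request object is constructed here, so the snippet works without a running server; call `urllib.request.urlopen(req)` to actually send it.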
🛣️ Roadmap
Tested with
- openai-python
- OPENAI_API_TYPE=default
- OPENAI_API_TYPE=azure
- llama-index
Supported APIs
- Completions
  - set temperature, top_p, and top_k
  - set max_tokens
  - set echo
  - set stop
  - set stream
  - set n
  - set presence_penalty and frequency_penalty
  - set logit_bias
- Embeddings
  - batch process
- Chat
  - Prefix cache for chat
- List model
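The Completions parameters listed above map directly onto an OpenAI-style request body. A minimal sketch of such a payload, with illustrative values (not defaults of this project):

```python
import json

# Illustrative completion request exercising the supported parameters;
# the model name must match an entry under `completions:` in config.yml.
payload = {
    "model": "text-ada-002",
    "prompt": "Q: What is a llama?\nA:",
    "max_tokens": 64,       # cap on generated tokens
    "temperature": 0.7,     # sampling temperature
    "top_p": 0.9,           # nucleus sampling
    "echo": False,          # do not repeat the prompt in the output
    "stop": ["\n"],         # stop generation at a newline
    "stream": False,        # return the full result in one response
}
body = json.dumps(payload)  # this is what gets POSTed to /v1/completions
```

Sending `body` to `/v1/completions` with the same Authorization header as the curl example would exercise these parameters against your configured model.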
Supported backends
- llama.cpp via llamacpp-python
- llama via pyllama
  - Without Quantization
  - With Quantization
- Support LLAMA2
Others
- Performance parameters like n_batch and n_thread
- Token auth
- Documents
- Integration tests
- A tool to download/prepare pretrained models
- Make config.ini and token file configurable
Hashes for llama_api_server-0.3.4-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | db2cfa0241a4a18090f0ba945f277292ea1be0015fb37240c078d4d7b58b9493
MD5 | ebac0611b7d514ebce51b9999169f2bb
BLAKE2b-256 | 9887e6cb3d10da98f462736505699ee7c0973fb5f1ab9c6b1f89bae52396b270