llm-http-api
HTTP API for LLM with OpenAI compatibility
Usage
> llm http-api --help
Usage: llm http-api [OPTIONS]
Run a FastAPI HTTP server with OpenAI compatibility
Options:
-h, --host TEXT [default: 0.0.0.0]
-p, --port INTEGER [default: 8080]
-l, --log-level TEXT [default: info]
--help Show this message and exit.
> curl http://localhost:8080/v1/embeddings -X POST -H "Content-Type: application/json" -d '{
"input": "Hello world",
"model": "jina-embeddings-v2-small-en"
}'
{"object":"embedding","embedding":[-0.47561466693878174,-0.4471365511417389,...],"index":0}
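The same request can be built from Python. Below is a minimal sketch that constructs the JSON body used in the curl example above; the server URL and model name are taken from that example, not guaranteed defaults, and the POST call is left as a comment since it requires a running server.

```python
import json

# Assumed server address, matching the curl example above.
EMBEDDINGS_URL = "http://localhost:8080/v1/embeddings"


def embedding_payload(text, model="jina-embeddings-v2-small-en"):
    """Return the JSON request body expected by POST /v1/embeddings."""
    return json.dumps({"input": text, "model": model})


# The payload can be sent with any HTTP client, e.g.:
#   requests.post(EMBEDDINGS_URL, data=embedding_payload("Hello world"),
#                 headers={"Content-Type": "application/json"})
print(embedding_payload("Hello world"))
```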
Supported OpenAI Endpoints
Models
Embeddings
Unsupported OpenAI Endpoints
A detailed list of unimplemented OpenAI endpoints can be found in the project repository.
Development
This repository manages the dev environment as a Nix flake and requires Nix to be installed.
nix develop -c $SHELL     # enter the Nix dev shell
make deps.install         # install runtime dependencies
make deps.install/test    # install test dependencies
make test                 # run the test suite
make coverage             # run tests with coverage reporting
make lint                 # run linters
make format               # auto-format the code