
llm-http-api

HTTP API for LLM with OpenAI compatibility

Usage

> llm http-api --help
Usage: llm http-api [OPTIONS]

  Run a FastAPI HTTP server with OpenAI compatibility

Options:
  -h, --host TEXT       [default: 0.0.0.0]
  -p, --port INTEGER    [default: 8080]
  -l, --log-level TEXT  [default: info]
  --help                Show this message and exit.
> curl http://localhost:8080/v1/embeddings -X POST -H "Content-Type: application/json" -d '{
  "input": "Hello world",
  "model": "jina-embeddings-v2-small-en"
}'
{"object":"embedding","embedding":[-0.47561466693878174,-0.4471365511417389,...],"index":0}
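The same endpoint can be called from Python. A minimal sketch using only the standard library, written against the single-object response shape shown above; the `embed` and `extract_vector` helper names and the base URL are illustrative, not part of the package:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8080/v1"  # assumes `llm http-api` on the default port

def extract_vector(body):
    """Pull the embedding out of a response shaped like the example above."""
    return body["embedding"]

def embed(text, model="jina-embeddings-v2-small-en"):
    """POST to /v1/embeddings and return the embedding vector."""
    payload = json.dumps({"input": text, "model": model}).encode()
    req = request.Request(
        f"{BASE_URL}/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return extract_vector(json.load(resp))
```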

Supported OpenAI Endpoints

Models

Embeddings
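Assuming the Models endpoint follows OpenAI's usual list shape, {"object": "list", "data": [{"id": ...}, ...]} (an assumption, since its response is not shown here), listing the available model IDs could look like:

```python
import json
from urllib import request

def model_ids(body):
    """Extract the `id` fields from an OpenAI-style model list response."""
    return [m["id"] for m in body.get("data", [])]

def list_model_ids(base_url="http://localhost:8080/v1"):
    """GET /v1/models and return the model IDs (assumes OpenAI's list shape)."""
    with request.urlopen(f"{base_url}/models") as resp:
        return model_ids(json.load(resp))
```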

Unsupported OpenAI Endpoints

A detailed list of unimplemented OpenAI endpoints can be found in the project documentation.

Development

This repository manages the dev environment as a Nix flake and requires Nix to be installed.

nix develop -c $SHELL     # enter the Nix dev shell
make deps.install         # install project dependencies
make deps.install/test    # install test dependencies
make test                 # run the test suite
make coverage             # run tests with coverage reporting
make lint                 # run linters
make format               # format the code

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-http-api-0.0.2.tar.gz (8.3 kB)

Uploaded Source

Built Distribution

llm_http_api-0.0.2-py3-none-any.whl (12.2 kB)

Uploaded Python 3
