LLM plugin to expose a FastAPI server with compatible APIs for popular LLM clients
Project description
llm-api
A FastAPI-based server plugin for the llm CLI that exposes LLM models through API interfaces compatible with popular LLM clients.
This allows you to use local or remote LLM models with any client that expects standard LLM API formats.
Installation
As an LLM Plugin
Install this plugin in the same environment as the llm CLI:
# Install from PyPI (once published)
llm install llm-api
# Or install from GitHub
llm install git+https://github.com/danielcorin/llm-api.git
# Or install from local development directory
cd /path/to/llm-api
llm install -e .
Verify installation:
# Check the plugin is installed
llm plugins
# The 'api' command should be available
llm api --help
Development Installation
For development, use uv:
# Clone the repository
git clone https://github.com/danielcorin/llm-api.git
cd llm-api
# Create a virtual environment and install dependencies
uv venv
source .venv/bin/activate
uv sync --dev
# Install as an editable LLM plugin
llm install -e .
Usage
Start the API server:
llm api --port 8000
The server provides OpenAI Chat Completions API endpoints:
- GET /v1/models - List available models
- POST /v1/chat/completions - Create chat completions with:
  - Streaming support
  - Tool/function calling (for models with supports_tools=True)
  - Structured output via response_format (for models with supports_schema=True)
  - Conversation history with tool results
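Once the server is running, a quick way to smoke-test it is to list the models it exposes. A minimal sketch using the OpenAI Python SDK, assuming the server is on localhost port 8000 as above:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Hits GET /v1/models; each entry's id is the name to pass as `model`
for model in client.models.list():
    print(model.id)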
Features
Basic Usage
from openai import OpenAI

# Point the client at your local llm-api server
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

# Use any model available in your llm CLI
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)
Streaming is also supported:

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
Tool/Function Calling
Models that support tools (indicated by supports_tools=True) can use OpenAI-compatible function calling:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"  # API key is not required for the local server
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }]
)
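The tool call comes back on the response message rather than as plain text. Below is a minimal sketch of the follow-up round trip; get_weather here is a hypothetical local function you would implement yourself:

import json

message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Execute the tool locally (get_weather is a hypothetical stand-in)
    result = get_weather(args["location"])

    # Send the result back as conversation history with a tool message
    followup = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "What's the weather in San Francisco?"},
            message,  # the assistant message containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": result},
        ],
    )
    print(followup.choices[0].message.content)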
Structured Output with Schema
Models that support schema (indicated by supports_schema=True) can generate structured JSON output:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Generate a person's profile"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "email": {"type": "string"}
                },
                "required": ["name", "age", "email"]
            }
        }
    }
)

# The response will contain valid JSON matching the schema
print(response.choices[0].message.content)
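Since the content is valid JSON matching the schema, it can be parsed directly:

import json

person = json.loads(response.choices[0].message.content)
print(person["name"], person["age"], person["email"])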
Testing
Run the test suite to verify the OpenAI-compatible API:
python -m pytest tests/test_openai_api.py
Development
Prerequisites
- Python 3.9+
- llm CLI tool installed
- One or more LLM models configured in llm
Code Quality
Format code:
ruff format .
Lint code:
ruff check --fix .
Running Tests
Run all tests:
pytest
Configuration
The server integrates with the llm CLI tool's configuration. Before starting it, make sure you have:

- Installed and configured llm with your preferred models
- Set up any necessary API keys for cloud-based models
- Verified models are available with llm models
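If you prefer to verify from Python rather than the command line, a short sketch using llm's Python API (llm.get_models) lists the same model registry the server serves from:

import llm

# Models registered with the llm CLI; their ids are what the server exposes
for model in llm.get_models():
    print(model.model_id)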
Supported API Specifications
Currently Implemented
- OpenAI Chat Completions API (/v1/chat/completions)
  - Compatible with OpenAI Python/JavaScript SDKs
  - Works with tools expecting the OpenAI format
  - Full support for streaming, tool calling, and structured output
Help Wanted
- OpenAI Responses API (/v1/responses)
- Anthropic Messages API (/v1/messages)
License
MIT
Download files
Source Distribution: llm_api_server-0.1.0.tar.gz
Built Distribution: llm_api_server-0.1.0-py3-none-any.whl
File details
Details for the file llm_api_server-0.1.0.tar.gz.
File metadata
- Download URL: llm_api_server-0.1.0.tar.gz
- Upload date:
- Size: 14.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ba258ca5f75ee17386b4667ce5e9db7b3b08b4305cbf053315c165bfe352827a |
| MD5 | 09e72c0d00c0373dd3fce8499463796e |
| BLAKE2b-256 | 5f36ea994ae6989d75ea667289109bac1cc2a411bcd998910f1ff09ee799b1d6 |
File details
Details for the file llm_api_server-0.1.0-py3-none-any.whl.
File metadata
- Download URL: llm_api_server-0.1.0-py3-none-any.whl
- Upload date:
- Size: 23.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 433ce5c9a390c56a73f839d7409464f3f5462c7245f47a7a8b223d59c583831c |
| MD5 | 504b6639614af816279760ca84ea60fa |
| BLAKE2b-256 | e92a390ea750aba255db90f20593ea2b5c40acd5ba741a754d6ce942435880a9 |