
A minimal API server for local HuggingFace or vLLM LLMs

Project description

Minimal LLM Server, for API calls

The simplest possible Python code for running local LLM inference as a REST API server and a simple client.

This package lets you start an inference server for Hugging Face–compatible models (like LLaMA, Qwen, GPT-OSS, etc.) on your own computer or server, and make it accessible to applications via HTTP. It supports both standard HuggingFace Transformers and high-performance vLLM backends.

See the Tutorial page for extended info.

Backend Options

This package now supports two inference backends:

1. HuggingFace Transformers (Standard)

  • ✓ Widely compatible
  • ✓ CPU support available
  • ✓ Smaller installation size
  • ✓ Good for development and testing

2. vLLM Optimized (High-Performance)

  • ✓ Up to 24x faster throughput than standard transformers
  • ✓ Lower latency for single requests
  • ✓ Better GPU memory utilization with PagedAttention
  • ✓ Automatic multi-GPU support with tensor parallelism
  • ✓ Continuous batching for higher throughput
  • ⚠ Requires CUDA GPUs (no CPU support)
  • ⚠ Best for production deployments

Installation via pip

Prerequisite

uv python install 3.12
uv venv --python 3.12
source .venv/bin/activate

Standard Installation (HuggingFace):

pip install min-llm-server-client

With vLLM Support:

pip install "min-llm-server-client[vllm]"

Installation From Source:

git clone https://github.com/afshinsadeghi/min_llm_server_client.git
cd min_llm_server_client

# Standard installation
uv pip install .

# Or with vLLM support
uv pip install ".[vllm]"

Usage

Starting the Server

Standard HuggingFace Transformers Server

min-llm-server --model_name meta-llama/Llama-3.3-70B-Instruct --max_new_tokens 100 --device cuda:0

vLLM Optimized Inference Server

min-llm-server-vllm --model_name meta-llama/Llama-3.3-70B-Instruct --max_new_tokens 100 --device auto

Command Options:

  • --model_name : Hugging Face model name or local path. Suggested models: openai/gpt-oss-20b, openai/gpt-oss-120b, meta-llama/Llama-3.3-70B-Instruct, meta-llama/Llama-3.1-8B, Qwen/Qwen3-0.6B, Qwen/Qwen2-VL-72B-Instruct-AWQ, deepseek-ai/DeepSeek-R1-Distill-Qwen-32B.

    Alternatively, a local model on your device can be used by passing its path, e.g. /path/to/model.

  • --max_new_tokens : maximum number of tokens to generate in the response.

  • --device : Device selection

    • auto - Auto-detect available GPUs (default)
    • cpu - Force CPU (HuggingFace only; vLLM requires a GPU)
    • cuda:0, cuda:1, or a list of GPUs, e.g. cuda:2,3,4,5,6,7

If the device parameter is omitted or set to auto, the server detects the available GPUs and uses them; if no GPU is available, it falls back to CPU, roughly as in the sketch below.
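
For illustration, the auto selection can be approximated with the short Python sketch below. This is not the package's internal code; it assumes PyTorch is installed and simply mirrors the behavior described above.

import torch

def pick_device(device: str = "auto") -> str:
    """Mirror the 'auto' behavior: prefer all visible GPUs, otherwise fall back to CPU."""
    if device != "auto":
        return device  # honor an explicit choice such as "cpu" or "cuda:0"
    if torch.cuda.is_available():
        # e.g. "cuda:0,1" when two GPUs are visible
        return "cuda:" + ",".join(str(i) for i in range(torch.cuda.device_count()))
    return "cpu"

print(pick_device())  # "cuda:0" on a single-GPU machine, "cpu" without a GPU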

Example runs:

Standard server with default settings (auto GPU detection):

min-llm-server 

Standard server on a specific GPU (e.g., GPU 0):

min-llm-server --model_name openai/gpt-oss-20b --device cuda:0

Standard server on a specific GPU (e.g., GPU 1):

min-llm-server --model_name openai/gpt-oss-120b --device cuda:1

Standard server forced on CPU:

min-llm-server --model_name openai/gpt-oss-20b --max_new_tokens 50 --device cpu

vLLM server with auto GPU detection (uses all available GPUs):

min-llm-server-vllm --model_name meta-llama/Llama-3.3-70B-Instruct

vLLM server on a specific GPU (e.g., GPU 2):

min-llm-server-vllm --model_name meta-llama/Llama-3.3-70B-Instruct --device cuda:2

Standard server on several GPUs:

min-llm-server --model_name meta-llama/Llama-3.3-70B-Instruct --device cuda:2,3,4,5,6,7

Sending Queries

Once the server is running (default: http://127.0.0.1:5000/llm/q), you can query it with curl or Python.

Curl:

curl -X POST http://127.0.0.1:5000/llm/q \
  -H "Content-Type: application/json" \
  -d '{"query": "What is Earth?", "key": "key1"}'

Python client:

from min_llm_server_client.local_llm_inference_api_client import send_query

response = send_query("What is the capital of France?", user="user1", key="key1")
print(response)
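
The same request can also be sent with plain HTTP from Python, which is handy when the client helper is not installed. The sketch below mirrors the curl payload above using the requests library; the URL and the query/key fields are taken from the examples in this section.

import requests

# POST the same JSON payload as the curl example to the default endpoint.
url = "http://127.0.0.1:5000/llm/q"
payload = {"query": "What is Earth?", "key": "key1"}

resp = requests.post(url, json=payload, timeout=120)
resp.raise_for_status()
print(resp.text)  # raw response body; its exact shape depends on the server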

Performance Comparison

LLaMA 3.1 8B - Standard HuggingFace Backend:

  • Intel CPU → ~30 seconds per request, ~2.4 GB RAM
  • A100 GPU → <1 second per request, ~34 GB GPU memory, ~4.8 GB CPU RAM

LLaMA 3.1 8B - vLLM Optimized Backend:

  • A100 GPU → ~0.1-0.3 seconds per request (3-10x faster)
  • Better memory efficiency with PagedAttention
  • Supports higher concurrent request throughput

Performance Tips:

  • Use vLLM for production deployments with high request volumes
  • Use standard backend for development, testing, or CPU-only environments
  • Both the HuggingFace-based and vLLM deployment methods automatically utilize multiple GPUs; vLLM does so with tensor parallelism
  • Both backends support the same API, making it easy to switch (a quick way to measure latency yourself is sketched below)
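
To get rough per-request latency numbers like those above on your own hardware, a minimal timing loop against the running server is enough. The sketch below assumes the default endpoint and payload format shown under Sending Queries; the figures it reports will depend on your model, backend, and hardware.

import time
import requests

# Time a handful of requests against the running server and report the average latency.
url = "http://127.0.0.1:5000/llm/q"
payload = {"query": "What is Earth?", "key": "key1"}

latencies = []
for _ in range(5):
    start = time.perf_counter()
    requests.post(url, json=payload, timeout=300).raise_for_status()
    latencies.append(time.perf_counter() - start)

print(f"average latency over {len(latencies)} requests: {sum(latencies) / len(latencies):.2f} s")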

Project Structure

min_llm_server_client/
├── src/
│   ├── local_llm_inference_api_client.py
│   ├── local_llm_inference_server_api.py
│   └── ...
└── README.md

License

This project is open source under the Apache 2.0 License.


Author

Afshin Sadeghi
🔗 GitHub
🔗 Google Scholar
🔗 LinkedIn

Project details


Download files

Download the file for your platform.

Source Distribution

min_llm_server_client-0.4.1.tar.gz (15.8 kB)

Uploaded Source

Built Distribution


min_llm_server_client-0.4.1-py3-none-any.whl (15.3 kB)

Uploaded Python 3

File details

Details for the file min_llm_server_client-0.4.1.tar.gz.

File metadata

  • Download URL: min_llm_server_client-0.4.1.tar.gz
  • Upload date:
  • Size: 15.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.12

File hashes

Hashes for min_llm_server_client-0.4.1.tar.gz
  • SHA256: 34e7ce4f14906794601b5eb6447b29410b7471d012fece9304134f65c1b1b310
  • MD5: 2bf9148a758a1c1d651ac25c5ac83835
  • BLAKE2b-256: eb15c15643cd8a01213bfa8e1042a83d1f6ec81d8eced5d3f133e571f7cddde0


File details

Details for the file min_llm_server_client-0.4.1-py3-none-any.whl.

File metadata

File hashes

Hashes for min_llm_server_client-0.4.1-py3-none-any.whl
  • SHA256: 40fc530539f5115fd7db7ffd9fda221eec933e6910d7f9de3489d8e366d1aedb
  • MD5: 4935abd0d44248ba5e1025d674f7e119
  • BLAKE2b-256: 8aad3fc2ee26a64cc9c8495c9d338b3547d05751c3fe1aa330001faf8840cd47

