Project description

Local-LLM

Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints. You request a model by the name it appears under in the model list, for example Mistral-7B-OpenOrca; if the model isn't already downloaded, the server automatically fetches it from Hugging Face and configures itself based on your CPU, RAM, and GPU. It is designed to make getting started with local models as easy as possible.

Run with Docker

You can run with either Docker or Docker Compose; you don't need both. Instructions to run with Docker Compose can be found here.
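
For orientation, a compose file along the following lines should behave like the docker run commands below. This is a minimal sketch, assuming the CPU image and the environment variables documented in this README; it is not the project's official compose file, so prefer the linked instructions.

# docker-compose.yml - minimal sketch mirroring the docker run commands
# in this README (assumption: not the project's official compose file)
version: "3"
services:
  local-llm:
    image: joshxt/local-llm:cpu
    ports:
      - "8091:8091"
    environment:
      - THREADS=10
      - BATCH_SIZE=512
      - LOCAL_LLM_API_KEY=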

Replace the environment variables with your desired settings; any value you leave unset falls back to a sensible default.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key.
  • THREADS - The number of threads to use. Default is your CPU core count minus 1.
  • BATCH_SIZE - The batch size to use for batch generation. Default is 512.

The following are only applicable to NVIDIA GPUs:

  • GPU_LAYERS - The number of model layers to offload to the GPU. Default is 0.
  • MAIN_GPU - The GPU to use for the main model. Default is 0.

Prerequisites

All you need is Docker. For NVIDIA GPU support, you also need the NVIDIA Container Toolkit installed so that Docker can pass the GPU through with --gpus all.

Run without NVIDIA GPU support

docker pull joshxt/local-llm:cpu
docker run -d --name local-llm -p 8091:8091 -e THREADS="10" -e BATCH_SIZE="512" -e LOCAL_LLM_API_KEY="" joshxt/local-llm:cpu

Run with NVIDIA GPU support

If you're using an NVIDIA GPU, you can use the CUDA version of the server.

docker pull joshxt/local-llm:cuda
docker run -d --name local-llm -p 8091:8091 --gpus all -e THREADS="10" -e BATCH_SIZE="512" -e GPU_LAYERS="0" -e MAIN_GPU="0" -e LOCAL_LLM_API_KEY="" joshxt/local-llm:cuda

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://localhost:8091/ by default. API documentation is served at the same URL while the server is running. There are examples for each of the endpoints in the Examples Jupyter Notebook.
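
Because the endpoints follow the OpenAI format, the official openai Python client can be pointed at the local server. The sketch below assumes the endpoints are mounted under the standard /v1 prefix; the model name is the example from above, and the API key is a placeholder unless you set LOCAL_LLM_API_KEY.

import openai

# Point the client at the local server instead of api.openai.com.
client = openai.OpenAI(
    base_url="http://localhost:8091/v1",
    api_key="none",  # replace with your LOCAL_LLM_API_KEY if you set one
)

# List the models the server knows about.
for model in client.models.list().data:
    print(model.id)

# Request a chat completion; the model is downloaded on first use.
response = client.chat.completions.create(
    model="Mistral-7B-OpenOrca",
    messages=[{"role": "user", "content": "Hello! Introduce yourself."}],
)
print(response.choices[0].message.content)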

Shout Outs

  • ggerganov/llama.cpp - For constantly improving the ability for anyone to run local models. It is one of my favorite and most exciting projects on GitHub.
  • abetlen/llama-cpp-python - For making it easy to extend the functionality of llama.cpp in Python.
  • TheBloke - For enabling everyone to run local models by quantizing them and sharing them, with a great README in every repository explaining how to use them.
  • Meta - For the absolutely earth-shattering open source releases of the Llama models and all of their other contributions to open source.
  • OpenAI - For setting good standards for endpoints and making great models.
  • Hugging Face - For making it easy to use and share models.
  • As much as I hate to do it, I can't list all of the amazing people building and fine tuning local models, but you know who you are. Thank you for all of your hard work and contributions to the community!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

local-llm-0.0.9.tar.gz (58.5 kB)

Built Distribution

local_llm-0.0.9-py3-none-any.whl (6.8 kB)

File details

Details for the file local-llm-0.0.9.tar.gz.

File metadata

  • Download URL: local-llm-0.0.9.tar.gz
  • Upload date:
  • Size: 58.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local-llm-0.0.9.tar.gz

  • SHA256: c7b33daed42c6e09dc4704436556ebea8d06655c7fa37e5956d6ab1e803efaee
  • MD5: 688eefd3204b8d6b49438436a2fdadb0
  • BLAKE2b-256: 061324e5a58bb880eb45d96cff6d2f3fb5564ae7a3a93170287e53a6e57c2faf

See more details on using hashes here.

File details

Details for the file local_llm-0.0.9-py3-none-any.whl.

File metadata

  • Download URL: local_llm-0.0.9-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local_llm-0.0.9-py3-none-any.whl

  • SHA256: 216acd7f054080b2a3b6a753a23386acb4a3f3b2487ccb17b9ccf915968d3b02
  • MD5: 4654b3e67df0246501d3174fbe674b3e
  • BLAKE2b-256: d52aa970fb202d04dfe8989515038117d407243820dceb86504c51f66d7d7b8e

See more details on using hashes here.
