Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.

Project description

Local-LLM

Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints. You request a model by the name it appears under in the model list, for example Mistral-7B-OpenOrca, and the server will automatically download it from Hugging Face if it isn't already present. The server also configures itself based on your CPU, RAM, and GPU, with the goal of making it as easy as possible to get started with running local models.

Run with Docker

You can choose to run with Docker or Docker Compose; you only need one of the two. Instructions to run with Docker Compose can be found here.

Replace the environment variables with your desired settings. Sensible defaults are assumed for any values you leave unset.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key.
  • THREADS - The number of threads to use. Default is your CPU core count minus 1.
  • BATCH_SIZE - The batch size to use for batch generation. Default is 512.

The following are only applicable to NVIDIA GPUs:

  • GPU_LAYERS - The number of model layers to offload to the GPU. Default is 0.
  • MAIN_GPU - The index of the GPU to use for the main model. Default is 0.

Run without NVIDIA GPU support

docker pull joshxt/local-llm:cpu
docker run -d --name local-llm -p 8091:8091 -e THREADS="10" -e BATCH_SIZE="512" -e LOCAL_LLM_API_KEY="" joshxt/local-llm:cpu
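
Once the container is up, you can sanity-check it from the host. The snippet below is a minimal sketch that assumes the server mirrors the OpenAI-style /v1/models route (the exact paths are documented at http://localhost:8091/ while the server is running); YOUR_LOCAL_LLM_API_KEY is a placeholder for whatever you set LOCAL_LLM_API_KEY to:

# List the available models; include the Authorization header only if LOCAL_LLM_API_KEY is set.
curl -H "Authorization: Bearer YOUR_LOCAL_LLM_API_KEY" http://localhost:8091/v1/models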

Run with NVIDIA GPU support

If you're using an NVIDIA GPU, you can use the CUDA version of the server.

docker pull joshxt/local-llm:cuda
docker run -d --name local-llm -p 8091:8091 --gpus all -e THREADS="10" -e BATCH_SIZE="512" -e GPU_LAYERS="0" -e MAIN_GPU="0" -e LOCAL_LLM_API_KEY="" joshxt/local-llm:cuda
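
To confirm that layers are actually being offloaded, start the container with a nonzero GPU_LAYERS value and watch its logs; llama.cpp prints its layer placement when it loads a model. The exact log text varies between versions, so treat this as a rough check rather than a documented interface:

# Follow the server logs and look for lines describing layers offloaded to the GPU.
docker logs -f local-llm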

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://localhost:8091/ by default. Documentation can be accessed at that URL while the server is running. There are examples for each of the endpoints in the Examples Jupyter Notebook.
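
For example, a chat completion request might look like the sketch below. This assumes the server mirrors OpenAI's /v1/chat/completions route and request schema; the model field takes the name from the model list, which triggers the automatic download described above. YOUR_LOCAL_LLM_API_KEY is a placeholder (omit the header if you left LOCAL_LLM_API_KEY unset):

curl http://localhost:8091/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_LOCAL_LLM_API_KEY" \
  -d '{"model": "Mistral-7B-OpenOrca", "messages": [{"role": "user", "content": "Hello!"}]}'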

Shout Outs

  • ggerganov/llama.cpp - For constantly improving the ability for anyone to run local models. It is one of my favorite and most exciting projects on GitHub.
  • abetlen/llama-cpp-python - For making it easy to extend the functionality of llama.cpp in Python.
  • TheBloke - For helping enable the ability to run local models by quantizing them and sharing them, with a great README on how to use them in every repository.
  • Meta - For the absolutely earth-shattering open-source releases of the LLaMA models and all of their other contributions to open source.
  • OpenAI - For setting good standards for endpoints and making great models.
  • Hugging Face - For making it easy to use and share models.
  • As much as I hate to do it, I can't list all of the amazing people building and fine tuning local models, but you know who you are. Thank you for all of your hard work and contributions to the community!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

local-llm-0.0.13.tar.gz (58.6 kB, Source)

Built Distribution

local_llm-0.0.13-py3-none-any.whl (6.9 kB, Python 3)

File details

Details for the file local-llm-0.0.13.tar.gz.

File metadata

  • Download URL: local-llm-0.0.13.tar.gz
  • Upload date:
  • Size: 58.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local-llm-0.0.13.tar.gz

  • SHA256: 8a1660727af96c9107e06ca8515347a15611ac273545c5001156ccf6965247e2
  • MD5: e540fe97b62b139fe0a134c34551a4c8
  • BLAKE2b-256: 3803a65e1be23e9822cf54e764e5fa7d52fad0d45f06d8dee9d4271fcca6703e

File details

Details for the file local_llm-0.0.13-py3-none-any.whl.

File metadata

  • Download URL: local_llm-0.0.13-py3-none-any.whl
  • Upload date:
  • Size: 6.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local_llm-0.0.13-py3-none-any.whl

  • SHA256: 2179dbbe9689f451818476f0d79074178b8653d507448d33a918b793242ed323
  • MD5: 2268302cb8b9f2f26b63f070729d9d60
  • BLAKE2b-256: 932c95068331281a1456196f1d642a4c0439704b9cfd715d1684bd426bba6943
