
Project description

Local-LLM

Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints. You request a model by the name it appears under in the model list, for example Mistral-7B-OpenOrca, and the server automatically downloads it from Hugging Face if it isn't already present. The server also configures itself automatically based on your CPU, RAM, and GPU. It is designed to make getting started with running local models as easy as possible.

Table of Contents 📖

  • Run with Docker
  • Run with Docker Compose
  • OpenAI Style Endpoint Usage
  • Shout Outs

Run with Docker

You can run with either Docker or Docker Compose; you do not need both. Instructions for Docker Compose are in the Run with Docker Compose section below.

Set the environment variables below to your desired values; any you leave unset are given sensible defaults.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key.
  • THREADS - The number of threads to use. Default is your CPU core count minus 1.

The following are only applicable to NVIDIA GPUs:

  • GPU_LAYERS - The number of layers to use on the GPU. Default is 0.
  • MAIN_GPU - The GPU to use for the main model. Default is 0.

Prerequisites

Run without NVIDIA GPU support

Modify the THREADS environment variable to your desired value; if you leave it unset, a sensible default is chosen.

docker pull joshxt/local-llm:cpu
docker run -d --name local-llm -p 8091:8091 -e THREADS="10" -e LOCAL_LLM_API_KEY="" -v ./models:/app/models joshxt/local-llm:cpu

Run with NVIDIA GPU support

If you're using an NVIDIA GPU, you can use the CUDA version of the server. The NVIDIA Container Toolkit must be installed.

Modify the GPU_LAYERS, MAIN_GPU, and THREADS environment variables to your desired values; any you leave unset are given sensible defaults.

docker pull joshxt/local-llm:cuda
docker run -d --name local-llm -p 8091:8091 --gpus all -e THREADS="10" -e GPU_LAYERS="20" -e MAIN_GPU="0" -e LOCAL_LLM_API_KEY="" -v ./models:/app/models joshxt/local-llm:cuda

Run with Docker Compose

You can run with either Docker Compose or Docker; you do not need both.

Update the .env file with your desired settings; any values you leave unset are given sensible defaults. An example is shown below.
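
For reference, a minimal .env along these lines should work, assuming the variable names match those used in the Docker examples above; adjust the values to your hardware.

# Leave LOCAL_LLM_API_KEY empty if you don't want the server to require an API key.
LOCAL_LLM_API_KEY=
THREADS=10
# GPU settings apply to NVIDIA GPUs only.
GPU_LAYERS=20
MAIN_GPU=0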

Run without NVIDIA GPU support with Docker Compose

docker-compose pull
docker-compose up

Run with NVIDIA GPU support with Docker Compose

docker-compose -f docker-compose-cuda.yml pull
docker-compose -f docker-compose-cuda.yml up

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1 by default. Documentation can be accessed at http://localhost:8091 when the server is running. There are examples for each of the endpoints in the Examples Jupyter Notebook.
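
As a quick illustration, below is a minimal sketch using the openai Python client (v1 or later) pointed at the local server; the port, the example model name Mistral-7B-OpenOrca, and the automatic download behavior come from the sections above, while the prompt and the placeholder API key value are only examples.

import openai

# Point the client at the local server instead of api.openai.com.
client = openai.OpenAI(
    base_url="http://localhost:8091/v1",
    api_key="none",  # replace with your LOCAL_LLM_API_KEY if you set one
)

# List the models the server exposes.
for model in client.models.list().data:
    print(model.id)

# Request a chat completion by model name; the model is downloaded on first use.
response = client.chat.completions.create(
    model="Mistral-7B-OpenOrca",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)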

Shout Outs

  • ggerganov/llama.cpp - For constantly improving the ability for anyone to run local models. It is one of my favorite and most exciting projects on GitHub.
  • abetlen/llama-cpp-python - For making it easy to extend the functionality of llama.cpp in Python.
  • TheBloke - For helping enable the ability to run local models by quantizing them and sharing them with a great readme on how to use them in every repository.
  • Meta - For the absolutely earth-shattering open source releases of the LLaMA models and all other contributions they have made to Open Source.
  • OpenAI - For setting good standards for endpoints and making great models.
  • Hugging Face - For making it easy to use and share models.
  • As much as I hate to do it, I can't list all of the amazing people building and fine tuning local models, but you know who you are. Thank you for all of your hard work and contributions to the community!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

local-llm-0.0.25.tar.gz (61.1 kB)

Built Distribution

local_llm-0.0.25-py3-none-any.whl (7.2 kB)

File details

Details for the file local-llm-0.0.25.tar.gz.

File metadata

  • Download URL: local-llm-0.0.25.tar.gz
  • Upload date:
  • Size: 61.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local-llm-0.0.25.tar.gz

  • SHA256: f5acd3a66e480a070a2235b4fc56df38f1dd3023a23d9504565c474060831a4d
  • MD5: 0b38d74b497039d74e259c085a5874aa
  • BLAKE2b-256: 52028a24bf2a20ce10e976a0aa151d9114ec44c2a5b8dba9311a1c6753f16cd9

See more details on using hashes here.

File details

Details for the file local_llm-0.0.25-py3-none-any.whl.

File metadata

  • Download URL: local_llm-0.0.25-py3-none-any.whl
  • Upload date:
  • Size: 7.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for local_llm-0.0.25-py3-none-any.whl

  • SHA256: 87f2bbf96c899b04f1cd5620290ea2912f9ea0d4abeed1a1bc53b10b7c1ec500
  • MD5: bfacc5c9b1807d1282d4e288631111ab
  • BLAKE2b-256: 54d609a0d05769e6cf00ea5b9c2355ff00436799e477f4eac58fa53ea83eb1aa

See more details on using hashes here.
