
Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.

Project description

Local-LLM


Local-LLM is a simple llama.cpp server that exposes a list of local language models to choose from, all running on your own computer. It is designed to make getting started with local models as easy as possible: it automatically handles downloading the model of your choice and configuring the server based on your CPU, RAM, and GPU. It also includes OpenAI Style endpoints for easy integration with other applications.

Prerequisites

Additional Linux Prerequisites

Installation

git clone https://github.com/Josh-XT/Local-LLM
cd Local-LLM

Expand Environment Setup if you would like to modify the default environment variables; otherwise, skip to Usage.

Environment Setup (Optional)

None of the values need to be modified in order to run the server. If you are using an NVIDIA GPU, I would recommend setting the GPU_LAYERS and MAIN_GPU environment variables. If you plan to expose the server to the internet, I would recommend setting the LOCAL_LLM_API_KEY environment variable for security. THREADS is set to your CPU thread count minus 2 by default; if this causes significant performance issues, consider setting the THREADS environment variable manually to a lower number.

Modify the .env file to your desired settings; an example is sketched after the list below. Assumptions will be made on all of these values if you choose to accept the defaults.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key when accepting requests.
  • DEFAULT_MODEL - The default model to use when no model is specified. Default is phi-2-dpo.
  • MULTI_SERVER - If set, this runs two servers: one with zephyr-7b-beta on GPU and one with phi-2-dpo on CPU. Otherwise, only one server is run.
  • AUTO_UPDATE - Whether or not to automatically update Local-LLM. Default is true.
  • THREADS - The number of CPU threads Local-LLM is allowed to use. Default is your CPU thread count minus 2.
  • GPU_LAYERS (Only applicable to NVIDIA GPU) - The number of layers to use on the GPU. Default is 0.
  • MAIN_GPU (Only applicable to NVIDIA GPU) - The GPU to use for the main model. Default is 0.
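
As an illustrative sketch only (the key is a placeholder, and the THREADS and GPU_LAYERS values below are assumptions that depend on your hardware, not recommendations), a .env for a machine with a single NVIDIA GPU might look like this:

# Placeholder - replace with your own secret key
LOCAL_LLM_API_KEY=replace-with-your-own-key
DEFAULT_MODEL=phi-2-dpo
AUTO_UPDATE=true
# Assumed 12-thread CPU, leaving 2 threads free
THREADS=10
# Assumed layer count to offload; depends on your VRAM and model size
GPU_LAYERS=20
MAIN_GPU=0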

Usage

./start.ps1
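
start.ps1 is a PowerShell script, so on Linux (assuming PowerShell is installed, per the Additional Linux Prerequisites above) it can also be launched explicitly with:

pwsh ./start.ps1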

For examples of how to use the server to communicate with the models, see the Examples Jupyter Notebook.

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1/ by default. Documentation can be accessed at http://localhost:8091 when the server is running.
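
As a minimal sketch (assuming the server is running locally on the default port, with LOCAL_LLM_API_KEY set and the default phi-2-dpo model, and using the standard OpenAI-style chat completions path), a request can be sent from Python with the requests library:

import requests

# The Authorization header is only required if LOCAL_LLM_API_KEY is set.
response = requests.post(
    "http://localhost:8091/v1/chat/completions",
    headers={"Authorization": "Bearer replace-with-your-own-key"},
    json={
        "model": "phi-2-dpo",
        "messages": [{"role": "user", "content": "Hello, what can you do?"}],
    },
)
# OpenAI Style responses put the reply under choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])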

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

local-llm-0.1.0.tar.gz (16.8 kB)

Uploaded Source

Built Distribution

local_llm-0.1.0-py3-none-any.whl (10.3 kB)

Uploaded Python 3

File details

Details for the file local-llm-0.1.0.tar.gz.

File metadata

  • Download URL: local-llm-0.1.0.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for local-llm-0.1.0.tar.gz:

  • SHA256: e6fb0903997f9d3aa3f1e08e2338e4224428b5b6f6c35c5208101a243d32e0e0
  • MD5: e4796c9cbecac717dba50e057ca9045f
  • BLAKE2b-256: b3a69abead06f9bb3e74112b68f3ec2fa986ab6ece4c0a3313e6e86928f446ae

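As a short sketch, a downloaded archive's SHA256 digest can be checked against the value above with Python's standard hashlib module:

import hashlib

# Expected SHA256 for local-llm-0.1.0.tar.gz, from the list above.
expected = "e6fb0903997f9d3aa3f1e08e2338e4224428b5b6f6c35c5208101a243d32e0e0"

with open("local-llm-0.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "MISMATCH - do not install")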

File details

Details for the file local_llm-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: local_llm-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 10.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for local_llm-0.1.0-py3-none-any.whl:

  • SHA256: e3dde679402ca3cf83ab93a679a8f29cf2ee84b124bda05186f74d2dad7852b4
  • MD5: af50f03d73617c5b57c0a8205e795104
  • BLAKE2b-256: 0483e52939e97f0ee3c1228cb485c055738b6e7d35e917c3225047290a75be11

