Local-LLM
Local-LLM is a llama.cpp server in Docker with OpenAI-style endpoints.
Local-LLM is a simple llama.cpp server that exposes a list of local language models to choose from and run on your own computer. It is designed to make getting started with local models as easy as possible: it automatically handles downloading the model of your choice and configures the server based on your CPU, RAM, and GPU. It also includes OpenAI-style endpoints for easy integration with other applications.
Prerequisites
- Git
- PowerShell 7.X
- Docker Desktop (Windows or Mac)
Additional Linux Prerequisites
Installation
git clone https://github.com/Josh-XT/Local-LLM
cd Local-LLM
See the Environment Setup section below if you would like to modify the default environment variables; otherwise, skip to Usage.
Environment Setup (Optional)
None of the values need to be modified in order to run the server. If you are using an NVIDIA GPU, I would recommend setting the GPU_LAYERS and MAIN_GPU environment variables. If you plan to expose the server to the internet, I would recommend setting the LOCAL_LLM_API_KEY environment variable for security. THREADS is set to your CPU thread count minus 2 by default; if this causes significant performance issues, consider setting the THREADS environment variable manually to a lower number.
Modify the .env file to your desired settings. Assumptions will be made for all of these values if you choose to accept the defaults.
- LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key when accepting requests.
- AUTO_UPDATE - Whether or not to automatically update Local-LLM. Default is true.
- THREADS - The number of CPU threads Local-LLM is allowed to use. Default is your CPU thread count minus 2.
- GPU_LAYERS (only applicable to NVIDIA GPUs) - The number of layers to use on the GPU. Default is 0.
- MAIN_GPU (only applicable to NVIDIA GPUs) - The GPU to use for the main model. Default is 0.
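For example, a filled-in .env for a machine with a single NVIDIA GPU might look like the following. The values shown are illustrative, not defaults; adjust them to your hardware:

LOCAL_LLM_API_KEY=your-secret-key
AUTO_UPDATE=true
THREADS=8
GPU_LAYERS=20
MAIN_GPU=0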
Usage
./start.ps1
For examples on how to use the server to communicate with the models, see the Examples Jupyter Notebook.
OpenAI Style Endpoint Usage
OpenAI Style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1/ by default. Documentation can be accessed at http://localhost:8091 when the server is running.
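Because the endpoints follow the OpenAI style, any OpenAI-compatible client can talk to the server. Below is a minimal sketch using the official openai Python package; the api_key should match your LOCAL_LLM_API_KEY (any placeholder string works if no key is set), and the model name is a placeholder for whichever model you chose to run:

from openai import OpenAI

# Point the client at the local server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8091/v1/",
    api_key="your-secret-key",  # placeholder; match LOCAL_LLM_API_KEY if set
)

# Send a chat completion request to the locally hosted model.
response = client.chat.completions.create(
    model="your-chosen-model",  # placeholder for the model you selected
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
)
print(response.choices[0].message.content)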