Local-LLM is a llama.cpp server in Docker with OpenAI-style endpoints.
Project description
Local-LLM
Local-LLM is a llama.cpp server in Docker with OpenAI-style endpoints. You request a model by the name it appears under in the model list, for example Mistral-7B-OpenOrca. If the model isn't already downloaded, the server automatically downloads it from Hugging Face, and it configures itself based on your CPU, RAM, and GPU. It is designed to make getting started with running local models as easy as possible.
Environment Setup
Modify the .env file to your desired settings. Sensible assumptions are made for any values you leave at their defaults.
LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key.
THREADS - The number of threads to use. Default is your CPU core count minus 1.

The following are only applicable to NVIDIA GPUs:

GPU_LAYERS - The number of layers to offload to the GPU. Default is 0.
MAIN_GPU - The index of the GPU to use for the main model. Default is 0.
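As a minimal sketch, a .env using these variables might look like the following; the THREADS value is illustrative, not a project default:

# Leave empty to run the server without API key authentication
LOCAL_LLM_API_KEY=
# e.g. 7 threads on an 8-core CPU (default is core count minus 1)
THREADS=7
# NVIDIA only: layers to offload to the GPU, and which GPU to use
GPU_LAYERS=0
MAIN_GPU=0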
Run Local-LLM
You can run locally with the instructions below, or with Docker; only one of the two is needed. Instructions to run with Docker or Docker Compose can be found here.
Installation
git clone https://github.com/Josh-XT/Local-LLM
cd Local-LLM
pip install -r requirements.txt
Usage
Make your modifications to the .env file, or accept the defaults to run on CPU without an API key.
uvicorn app:app --host 0.0.0.0 --port 8091 --workers 4
OpenAI Style Endpoint Usage
OpenAI-style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1 by default. Interactive documentation can be accessed at http://localhost:8091 while the server is running. There are examples for each of the endpoints in the Examples Jupyter Notebook.
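Because the endpoints follow the OpenAI API shape, the official openai Python package (v1-style client) can be pointed at the local server. A minimal sketch, assuming the server is running locally on port 8091 and LOCAL_LLM_API_KEY is unset, in which case any placeholder key works:

from openai import OpenAI

# Point the client at the local server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8091/v1", api_key="unused")

# Request the model by the name it appears under in the model list;
# the server downloads it from Hugging Face on first use.
response = client.chat.completions.create(
    model="Mistral-7B-OpenOrca",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

Expect the first request for a given model to be slow, since the server downloads and configures the model before answering.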
Download files
File details
Details for the file local-llm-0.0.44.tar.gz.
File metadata
- Download URL: local-llm-0.0.44.tar.gz
- Upload date:
- Size: 12.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 5aa6070ba957e77c8df58f6d48064cdfd1ccf79736f30d38d43ed74493100a44
MD5 | f486a7ffc4137323cc1d24f68d7f8af4
BLAKE2b-256 | d4fd4a45b9877f906b5666643ab58e2b963dd0be1a178d166ec18f7d26007626
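As an illustrative aside (not part of the package), a downloaded archive can be checked against the SHA256 digest above with Python's standard hashlib; the path assumes the file sits in the current directory:

import hashlib

EXPECTED = "5aa6070ba957e77c8df58f6d48064cdfd1ccf79736f30d38d43ed74493100a44"

# Hash the archive in binary mode and compare with the published digest.
with open("local-llm-0.0.44.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "MISMATCH")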
File details
Details for the file local_llm-0.0.44-py3-none-any.whl.
File metadata
- Download URL: local_llm-0.0.44-py3-none-any.whl
- Upload date:
- Size: 6.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0b75e912e5c1308bdb3bfe1bca0098a934a0e160561c10bdcd45129b205c3168
MD5 | f640141b21579c2ba4b6b4ef5c2735db
BLAKE2b-256 | 83bcee6d1e092c378ca40558104eafc43a9426ba78e925b7229612099a3ce5a5