
Project description

Local-LLM

Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints. You request a model by the name it appears under in the model list, for example Mistral-7B-OpenOrca; the server automatically downloads the model from Hugging Face if it isn't already present and configures itself based on your CPU, RAM, and GPU. It is designed to make getting started with local models as easy as possible.

Environment Setup

Modify the .env file to your desired settings, or accept the defaults; reasonable assumptions will be made for any value you leave unset. A sample .env follows the variable lists below.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key.
  • THREADS - The number of threads to use. Default is your CPU core count minus 1.

The following are only applicable to NVIDIA GPUs:

  • GPU_LAYERS - The number of layers to use on the GPU. Default is 0.
  • MAIN_GPU - The GPU to use for the main model. Default is 0.
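
As a rough sketch, a .env that sets all four documented variables might look like this (the values are illustrative, not recommendations):

LOCAL_LLM_API_KEY=my-secret-key
THREADS=7
GPU_LAYERS=20
MAIN_GPU=0

Leaving LOCAL_LLM_API_KEY empty means the server will not require an API key, and GPU_LAYERS=0 keeps inference entirely on the CPU.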

Run Local-LLM

You can run locally with the instructions below, or with Docker; only one of the two is needed. Instructions for running with Docker or Docker Compose can be found in the project repository.

Prerequisites

Installation

git clone https://github.com/Josh-XT/Local-LLM
cd Local-LLM
pip install -r requirements.txt

Usage

Make your modifications to the .env file, or proceed with the defaults to run on CPU without an API key.

uvicorn app:app --host 0.0.0.0 --port 8091 --workers 4

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1 by default. Documentation can be accessed at http://localhost:8091 while the server is running. There are examples for each of the endpoints in the Examples Jupyter Notebook.
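
As a rough sketch of calling the server with the openai Python package (1.x client style; the base URL and port assume the defaults above, and Mistral-7B-OpenOrca is just the example model name from the model list):

from openai import OpenAI

# Point the client at the Local-LLM server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:8091/v1",
    api_key="my-secret-key",  # any placeholder works if LOCAL_LLM_API_KEY is unset
)

# List the models the server knows about; the id is what you pass as model=...
for model in client.models.list():
    print(model.id)

# Chat completion against a model from that list; it will be downloaded
# from Hugging Face on first use if it isn't already present.
response = client.chat.completions.create(
    model="Mistral-7B-OpenOrca",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)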

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

local-llm-0.0.43.tar.gz (60.1 kB)

Uploaded Source

Built Distribution

local_llm-0.0.43-py3-none-any.whl (6.6 kB)

Uploaded Python 3

File details

Details for the file local-llm-0.0.43.tar.gz.

File metadata

  • Download URL: local-llm-0.0.43.tar.gz
  • Upload date:
  • Size: 60.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for local-llm-0.0.43.tar.gz

  • SHA256: ad0a59da53d2b10168505afd12943d23b7c06fe0463daa0be5c8c9486c8c55c1
  • MD5: 23b8baeec831f7117b1a22b9d2651125
  • BLAKE2b-256: b17cd3a4b5dfa85a3ce482f8658615c8da1e77226568a37852eb107567fa2fcc

See more details on using hashes here.
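
To check a downloaded archive against the SHA256 digest above, a minimal sketch in Python (the path assumes the file sits in the current directory):

import hashlib

path = "local-llm-0.0.43.tar.gz"  # hypothetical local path to the download
expected = "ad0a59da53d2b10168505afd12943d23b7c06fe0463daa0be5c8c9486c8c55c1"

# Hash the file in chunks so large archives don't need to fit in memory.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == expected else "MISMATCH: " + sha256.hexdigest())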

File details

Details for the file local_llm-0.0.43-py3-none-any.whl.

File metadata

  • Download URL: local_llm-0.0.43-py3-none-any.whl
  • Upload date:
  • Size: 6.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for local_llm-0.0.43-py3-none-any.whl

  • SHA256: e2a9eb0e1f6455c3c44b62b502ad4f8a3949bb0b088b2c17ca9bed6412712d3b
  • MD5: cea2137ada0a6c539b3b43e485796f21
  • BLAKE2b-256: 5943862f39891702bc587b1acae6475881448b338e3aeaa1708979cd2b9282fc

See more details on using hashes here.
