
OnPrem.LLM

A tool for running large language models on-premises using non-public data

OnPrem.LLM is a simple Python package that makes it easier to run large language models (LLMs) on your own machines using non-public data (possibly behind corporate firewalls). Inspired largely by the privateGPT GitHub repo, OnPrem.LLM is intended to help integrate local LLMs into practical applications.

The full documentation is here.

A Google Colab demo of installing and using OnPrem.LLM is here.

Install

Once you have installed PyTorch and llama-cpp-python, you can install OnPrem.LLM with:

pip install onprem

For fast GPU-accelerated inference, see the additional instructions below. See the FAQ if you experience issues with the llama-cpp-python installation.

Note: The pip install onprem command will install PyTorch and llama-cpp-python automatically if they are not already installed, but we recommend visiting the links above to install these packages in a way that is optimized for your system (e.g., with GPU support).

How to Use

Setup

from onprem import LLM

llm = LLM()

By default, a 7B-parameter model is downloaded and used. If use_larger=True, a 13B-parameter model is used instead. You can also supply the URL of an LLM of your choosing to LLM (see the code generation section below for an example). Any extra parameters supplied to LLM are forwarded directly to llama-cpp-python. As of v0.0.20, OnPrem.LLM supports the newer GGUF format.
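
As a quick sketch of passing extra parameters through the constructor (the n_ctx value below is illustrative, not a recommended setting):

from onprem import LLM

# use the larger 13B default model and forward n_ctx (a llama-cpp-python
# context-window parameter) directly through the LLM constructor
llm = LLM(use_larger=True, n_ctx=2048)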

Send Prompts to the LLM to Solve Problems

This is an example of few-shot prompting, where we provide an example of what we want the LLM to do.

prompt = """Extract the names of people in the supplied sentences. Here is an example:
Sentence: James Gandolfini and Paul Newman were great actors.
People:
James Gandolfini, Paul Newman
Sentence:
I like Cillian Murphy's acting. Florence Pugh is great, too.
People:"""

saved_output = llm.prompt(prompt)
Cillian Murphy, Florence Pugh

Additional prompt examples are shown here.

Talk to Your Documents

Answers are generated from the content of your documents (i.e., retrieval-augmented generation, or RAG). Here, we supply use_larger=True to use the larger default model, which is better suited to this use case, and also enable GPU offloading to speed up answer generation.

from onprem import LLM

llm = LLM(use_larger=True, n_gpu_layers=35)

Step 1: Ingest the Documents into a Vector Database

llm.ingest("./sample_data")
Creating new vectorstore at /home/amaiya/onprem_data/vectordb
Loading documents from ./sample_data
Loaded 12 new documents from ./sample_data
Split into 153 chunks of text (max. 500 chars each)
Creating embeddings. May take some minutes...
Ingestion complete! You can now query your documents using the LLM.ask method

Loading new documents: 100%|██████████████████████| 3/3 [00:00<00:00, 25.52it/s]

Step 2: Answer Questions About the Documents

question = """What is  ktrain?"""
result = llm.ask(question)
 ktrain is a low-code platform designed to facilitate the full machine learning workflow, from preprocessing inputs to training, tuning, troubleshooting, and applying models. It focuses on automating other aspects of the ML workflow in order to augment and complement human engineers rather than replacing them. Inspired by fastai and ludwig, ktrain is intended to democratize machine learning for beginners and domain experts with minimal programming or data science experience.

The sources used by the model to generate the answer are stored in result['source_documents']:

print("\nSources:\n")
for i, document in enumerate(result["source_documents"]):
    print(f"\n{i+1}.> " + document.metadata["source"] + ":")
    print(document.page_content)
Sources:


1.> ./sample_data/ktrain_paper.pdf:
lection (He et al., 2019). By contrast, ktrain places less emphasis on this aspect of au-
tomation and instead focuses on either partially or fully automating other aspects of the
machine learning (ML) workflow. For these reasons, ktrain is less of a traditional Au-
2

2.> ./sample_data/ktrain_paper.pdf:
possible, ktrain automates (either algorithmically or through setting well-performing de-
faults), but also allows users to make choices that best fit their unique application require-
ments. In this way, ktrain uses automation to augment and complement human engineers
rather than attempting to entirely replace them. In doing so, the strengths of both are
better exploited. Following inspiration from a blog post1 by Rachel Thomas of fast.ai

3.> ./sample_data/ktrain_paper.pdf:
with custom models and data formats, as well.
Inspired by other low-code (and no-
code) open-source ML libraries such as fastai (Howard and Gugger, 2020) and ludwig
(Molino et al., 2019), ktrain is intended to help further democratize machine learning by
enabling beginners and domain experts with minimal programming or data science experi-
4. http://archive.ics.uci.edu/ml/datasets/Twenty+Newsgroups
6

4.> ./sample_data/ktrain_paper.pdf:
ktrain: A Low-Code Library for Augmented Machine Learning
toML platform and more of what might be called a “low-code” ML platform. Through
automation or semi-automation, ktrain facilitates the full machine learning workflow from
curating and preprocessing inputs (i.e., ground-truth-labeled training data) to training,
tuning, troubleshooting, and applying models. In this way, ktrain is well-suited for domain
experts who may have less experience with machine learning and software coding. Where

Guided Prompts

You can use OnPrem.LLM with the Guidance package to guide the LLM to generate outputs based on your conditions and constraints. We’ll show a couple of examples here, but see our documentation on guided prompts for more information.

Structured Outputs with onprem.guider.Guider

Here, we’ll use a Guider instance to generate fictional D&D-type characters that conform to the precise structure we want (i.e., JSON):

# create the Guider instance
from onprem.guider import Guider
from guidance import gen, select

guider = Guider(llm)

# this is a function that generates a Guidance prompt that will be fed to Guider
sample_weapons = ["sword", "axe", "mace", "spear", "bow", "crossbow"]
sample_armour = ["leather", "chainmail", "plate"]
def generate_character_prompt(
    character_one_liner,
    weapons: list[str] = sample_weapons,
    armour: list[str] = sample_armour,
    n_items: int = 3
):
    prompt = ''
    prompt += "{"
    prompt += f'"description" : "{character_one_liner}",'
    prompt += '"name" : "' + gen(name="character_name", stop='"') + '",'
    prompt += '"age" : ' + gen(name="age", regex="[0-9]+") + ','
    prompt += '"armour" : "' + select(armour, name="armour") + '",'
    prompt += '"weapon" : "' + select(weapons, name="weapon") + '",'
    prompt += '"class" : "' + gen(name="character_class", stop='"') + '",'
    prompt += '"mantra" : "' + gen(name="mantra", stop='"') + '",'
    prompt += '"strength" : ' + gen(name="age", regex="[0-9]+") + ','
    prompt += '"quest_items" : [ '
    for i in range(n_items):
        prompt += '"' + gen(name="items", list_append=True, stop='"') + '"'  
        if i < n_items - 1:
            prompt += ','
    prompt += "]"
    prompt += "}"
    return prompt
# feed prompt to Guider and extract JSON
import json
d = guider.prompt(generate_character_prompt("A quick and nimble fighter"), echo=False)
print('Generated JSON:')
print(json.dumps(d, indent=4))
Generated JSON:
{
    "items": [
        "Quest Item 3",
        "Quest Item 2",
        "Quest Item 1"
    ],
    "age": "10",
    "mantra": "I am the blade of justice.",
    "character_class": "fighter",
    "weapon": "sword",
    "armour": "leather",
    "character_name": "Katana"
}

Using Regular Expressions to Control LLM Generation

prompt = f"""Question: Luke has ten balls. He gives three to his brother. How many balls does he have left?
Answer: """ + gen(name='answer', regex='\d+')

guider.prompt(prompt, echo=False)
{'answer': '7'}
prompt = '19, 18,' + gen(name='output', max_tokens=50, stop_regex=r'[^\d]7[^\d]')
guider.prompt(prompt)
19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,
{'output': ' 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,'}

See the documentation for more examples of how to use Guidance with OnPrem.LLM.

Summarization Pipeline

Summarize your raw documents (e.g., PDFs, MS Word) with an LLM.

from onprem import LLM
llm = LLM(n_gpu_layers=35, verbose=False, mute_stream=True) # disable display of intermediate summarization prompts/inferences
from onprem.pipelines import Summarizer
summ = Summarizer(llm)

text = summ.summarize('sample_data/1/ktrain_paper.pdf', max_chunks_to_use=5) # omit max_chunks_to_use parameter to consider entire document
print(text)
 The KTrain library provides an easy-to-use framework for building and training machine learning models using low-code techniques for various data types (text, image, graph, tabular) and tasks (classification, regression). It can be used to fine-tune pretrained models in text classification and image classification tasks respectively. Additionally, it reduces cognitive load by providing a unified interface to various and disparate machine learning tasks, allowing users to focus on more important tasks that may require domain expertise or are less amenable to automation.

Text to Code Generation

We’ll use the CodeUp LLM by supplying the URL and employing the particular prompt format this model expects.

from onprem import LLM

url = "https://huggingface.co/TheBloke/CodeUp-Llama-2-13B-Chat-HF-GGUF/resolve/main/codeup-llama-2-13b-chat-hf.Q4_K_M.gguf"
llm = LLM(url, n_gpu_layers=43)  # see below for GPU information

Set up the prompt based on what this model expects (this is important):

template = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:"""
answer = llm.prompt(
    "Write Python code to validate an email address.", prompt_template=template
)
Here is an example of Python code that can be used to validate an email address:
```
import re

def validate_email(email):
    # Use a regular expression to check if the email address is in the correct format
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    if re.match(pattern, email):
        return True
    else:
        return False

# Test the validate_email function with different inputs
print("Email address is valid:", validate_email("example@example.com"))  # Should print "True"
print("Email address is invalid:", validate_email("example@"))  # Should print "False"
print("Email address is invalid:", validate_email("example.com"))  # Should print "False"
```
The code defines a function `validate_email` that takes an email address as input and uses a regular expression to check if the email address is in the correct format. The regular expression checks for an email address that consists of one or more letters, numbers, periods, hyphens, or underscores followed by the `@` symbol, followed by one or more letters, periods, hyphens, or underscores followed by a `.` and two to three letters.
The function returns `True` if the email address is valid, and `False` otherwise. The code also includes some test examples to demonstrate how to use the function.

Let’s try out the code generated above.

import re


def validate_email(email):
    # Use a regular expression to check if the email address is in the correct format
    pattern = r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$"
    if re.match(pattern, email):
        return True
    else:
        return False


print(validate_email("sam@@openai.com"))  # bad email address
print(validate_email("sam@openai"))  # bad email address
print(validate_email("sam@openai.com"))  # good email address
False
False
True

The generated code may sometimes need editing, but this one worked out-of-the-box.

Built-In Web App

OnPrem.LLM includes a built-in Web app to access the LLM. To start it, run the following command after installation:

onprem --port 8000

Then, enter localhost:8000 (or <domain_name>:8000 if running on a remote server) in a Web browser to access the application:

screenshot

For more information, see the corresponding documentation.

Speeding Up Inference Using a GPU

The example above employed a CPU. If you have a GPU (even an older one with less VRAM), you can speed up responses. See the LangChain docs on Llama.cpp for instructions on installing llama-cpp-python with GPU support for your system.

The steps below describe installing and using llama-cpp-python with cuBLAS support and can be employed for GPU acceleration on systems with NVIDIA GPUs (e.g., Linux, WSL2, Google Colab).

Step 1: Install llama-cpp-python with cuBLAS support

CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir

# For Mac users replace above with:
# CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir

Step 2: Use the n_gpu_layers argument with LLM

llm = LLM(n_gpu_layers=35)

The value for n_gpu_layers depends on your GPU memory and the model you’re using (e.g., a maximum of 35 for the default 7B model). You can reduce the value if you get an error (e.g., CUDA error: out-of-memory). For instance, using two older NVIDIA TITAN V GPUs, each with 12GB of VRAM, 59 out of 83 layers in a quantized Llama-2 70B model can be offloaded to the GPUs (i.e., offloading 60 or more layers results in a “CUDA out of memory” error).

With the steps above, calls to methods like llm.prompt will offload computation to your GPU and speed up responses from the LLM.

The above assumes that NVIDIA drivers and the CUDA toolkit are already installed. On Ubuntu Linux systems, this can be accomplished with a single command.

FAQ

  1. How do I use other models with OnPrem.LLM?

    You can supply the URL to other models to the LLM constructor, as we did above in the code generation example.

    As of v0.0.20, we support models in GGUF format, which supersedes the older GGML format. You can find llama.cpp-supported models with GGUF in the file name on huggingface.co.

    Make sure you are pointing to the URL of the actual GGUF model file, which is the “download” link on the model’s page. An example for Mistral-7B is shown below:

    screenshot
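
    As a code sketch of this (the Mistral-7B URL below is illustrative; use the download link shown on the model’s page):

    # illustrative: point LLM at the direct GGUF download link for the model
    llm = LLM(model_url='https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf')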

    Note that some models require specific prompt formats. For instance, the prompt template required for Zephyr-7B, as described on the model’s page, is:

    <|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>

    So, to use the Zephyr-7B model, you must supply the prompt_template argument to methods like LLM.ask and LLM.prompt (or specify it in the webapp.yml configuration for the Web app).

    # how to use Zephyr-7B with OnPrem.LLM
    llm = LLM(model_url='https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf',
              prompt_template = "<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>",
              n_gpu_layers=33)
     llm.prompt("List three cute names for a cat.")
    
  2. I’m behind a corporate firewall and am receiving an SSL error when trying to download the model?

    Try this:

    from onprem import LLM
    LLM.download_model(url, ssl_verify=False)
    
  3. How do I use this on a machine with no internet access?

    Use the LLM.download_model method to download the model files to <your_home_directory>/onprem_data and transfer them to the same location on the air-gapped machine.
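
    For example, a minimal sketch (where url is the direct download link to the GGUF model file you plan to use):

    from onprem import LLM
    LLM.download_model(url)  # downloads to <your_home_directory>/onprem_data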

    For the ingest and ask methods, you will also need to download and transfer the embedding model files:

    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
    model.save('/some/folder')
    

    Copy the /some/folder folder to the air-gapped machine and supply the path to LLM via the embedding_model_name parameter.
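
    A minimal sketch of that last step (the path is wherever you copied the folder):

    llm = LLM(embedding_model_name='/some/folder')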

  4. When installing onprem, I’m getting errors related to llama-cpp-python on Windows/Mac/Linux?

    See this LangChain documentation on Llama.cpp for help installing the llama-cpp-python package on your system. Additional tips for different operating systems are shown below:

    For Linux systems like Ubuntu, try this: sudo apt-get install build-essential g++ clang. Other tips are here.

    For Windows systems, either use Windows Subsystem for Linux (WSL) or install Microsoft Visual Studio build tools and ensure the selections shown in this post are installed. WSL is recommended.

    For Macs, try following these tips.

    If you still have problems, there are various other tips for each of the above OSes in this privateGPT repo thread. Of course, you can also easily use OnPrem.LLM on Google Colab.

  5. llama-cpp-python is failing to load my model from the model path on Google Colab.

    For reasons that are unclear, newer versions of llama-cpp-python fail to load models on Google Colab unless you supply verbose=True to the LLM constructor (which is passed directly to llama-cpp-python). If you experience this problem locally, try supplying verbose=True to LLM.

  6. I’m getting an “Illegal instruction (core dumped)” error when instantiating a langchain.llms.LlamaCpp or onprem.LLM object?

    Your CPU may not support instructions that cmake is using for one reason or another (e.g., due to Hyper-V in VirtualBox settings). You can try turning them off when building and installing llama-cpp-python:

    # example
    CMAKE_ARGS="-DLLAMA_CUBLAS=ON -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_F16C=OFF -DLLAMA_FMA=OFF" FORCE_CMAKE=1 pip install --force-reinstall llama-cpp-python --no-cache-dir
    

