PyLLaMACpp

Python bindings for llama.cpp

For those who don't know, llama.cpp is a port of Facebook's LLaMA model in pure C/C++:

  • Without dependencies
  • Apple silicon first-class citizen - optimized via ARM NEON
  • AVX2 support for x86 architectures
  • Mixed F16 / F32 precision
  • 4-bit quantization support
  • Runs on the CPU


Installation

The easiest way is to install the prebuilt wheels from PyPI:
pip install pyllamacpp

However, the llama.cpp compilation process takes the architecture of the target CPU into account, so you might need to build the package from source to get the best performance on your machine:

pip install git+https://github.com/abdeladim-s/pyllamacpp.git

CLI

Once the package is installed, you can test it with the simple built-in command line interface:

pyllamacpp path/to/ggml/model
pyllamacpp -h

usage: pyllamacpp [-h] [--n_ctx N_CTX] [--n_parts N_PARTS] [--seed SEED] [--f16_kv F16_KV] [--logits_all LOGITS_ALL]
                  [--vocab_only VOCAB_ONLY] [--use_mlock USE_MLOCK] [--embedding EMBEDDING] [--n_predict N_PREDICT] [--n_threads N_THREADS]
                  [--repeat_last_n REPEAT_LAST_N] [--top_k TOP_K] [--top_p TOP_P] [--temp TEMP] [--repeat_penalty REPEAT_PENALTY]
                  [--n_batch N_BATCH]
                  model

This works like a chatbot. You can start the conversation with something like `Hi, can you help me?`, but be aware that the model may hallucinate!

positional arguments:
  model                 The path of the model file

options:
  -h, --help            show this help message and exit
  --n_ctx N_CTX         text context
  --n_parts N_PARTS
  --seed SEED           RNG seed
  --f16_kv F16_KV       use fp16 for KV cache
  --logits_all LOGITS_ALL
                        the llama_eval() call computes all logits, not just the last one
  --vocab_only VOCAB_ONLY
                        only load the vocabulary, no weights
  --use_mlock USE_MLOCK
                        force system to keep model in RAM
  --embedding EMBEDDING
                        embedding mode only
  --n_predict N_PREDICT
                        Number of tokens to predict
  --n_threads N_THREADS
                        Number of threads
  --repeat_last_n REPEAT_LAST_N
                        Last n tokens to penalize
  --top_k TOP_K         top_k
  --top_p TOP_P         top_p
  --temp TEMP           temp
  --repeat_penalty REPEAT_PENALTY
                        repeat_penalty
  --n_batch N_BATCH     batch size for prompt processing
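
For example, to generate at most 128 tokens using 4 threads and a slightly lower temperature (the model path and flag values below are only illustrative; adjust them to your setup):

pyllamacpp ./models/ggml-model.bin --n_predict 128 --n_threads 4 --temp 0.7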

Tutorial

Quick start

A simple Pythonic API is built on top of llama.cpp C/C++ functions. You can call it from Python as follows:

from pyllamacpp.model import Model

model = Model(model_path='./models/gpt4all-model.bin')
for token in model.generate("Tell me a joke"):
    print(token, end='', flush=True)
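
The generate call also accepts sampling parameters. The sketch below assumes the keyword arguments mirror the CLI flags above (n_predict, n_threads, temp, top_k, top_p, repeat_penalty); check the API reference for the exact signature:

from pyllamacpp.model import Model

model = Model(model_path='./models/gpt4all-model.bin')

# Keyword arguments assumed to mirror the CLI flags; verify against the API reference.
for token in model.generate("Tell me a joke",
                            n_predict=64,       # maximum number of tokens to generate
                            n_threads=4,        # CPU threads to use
                            temp=0.8,           # sampling temperature
                            top_k=40,
                            top_p=0.95,
                            repeat_penalty=1.3):
    print(token, end='', flush=True)
print()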

Interactive Dialogue

You can set up an interactive dialogue simply by keeping the same model instance alive between prompts:

from pyllamacpp.model import Model

# Reusing the same Model instance preserves the conversation context
model = Model(model_path='/path/to/ggml/model')
while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='', flush=True)
        # Stream the tokens as they are generated
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break

Attribute a persona to the language model

The following example shows how to attribute a persona to the language model:

from pyllamacpp.model import Model

prompt_context = """Act as Bob. Bob is helpful, kind, honest,
and never fails to answer the User's requests immediately and with precision. 

User: Nice to meet you Bob!
Bob: Welcome! I'm here to assist you with anything you need. What can I do for you today?
"""

prompt_prefix = "\nUser:"
prompt_suffix = "\nBob:"

model = Model(model_path='/path/to/ggml/model', 
              prompt_context=prompt_context, 
              prompt_prefix=prompt_prefix,
              prompt_suffix=prompt_suffix)

sequence = ''
stop_word = prompt_prefix.strip()

while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='', flush=True)
        for token in model.generate(prompt):
            if token == '\n':
                # A newline might be the start of the stop word: buffer it
                sequence += token
                continue
            if len(sequence) != 0:
                if stop_word.startswith(sequence.strip()):
                    # Still a possible prefix of the stop word: keep buffering
                    sequence += token
                    if sequence.strip() == stop_word:
                        # The model began a new "User:" turn: end this answer
                        sequence = ''
                        break
                    else:
                        continue
                else:
                    # Not the stop word after all: flush the buffer and resume
                    print(sequence, end='', flush=True)
                    sequence = ''
            print(token, end='', flush=True)

        print()
    except KeyboardInterrupt:
        break
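
The sequence buffer is needed because the model will happily continue past Bob's answer and generate a fake "User:" turn. Any run of tokens that could be the start of the stop word is held back instead of printed; if the buffer ends up matching the stop word exactly, generation for that turn is cut off, otherwise the buffered text is flushed and streaming resumes.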

API reference

You can check the API reference documentation for more details.

Supported models

Fully tested with the GPT4All model; see PyGPT4All.

All other models supported by llama.cpp should work as well.

Discussions and contributions

If you find a bug, please open an issue.

If you have feedback, or want to share how you are using the project, feel free to open a new topic in the Discussions section.

License

This project is licensed under the same license as llama.cpp (MIT License).

