Python bindings for llama.cpp + gpt4all

Project description

PyLLaMACpp

Officially supported Python bindings for llama.cpp + gpt4all

For those who don't know, llama.cpp is a port of Facebook's LLaMA model in pure C/C++:

  • Without dependencies
  • Apple silicon first-class citizen - optimized via ARM NEON
  • AVX2 support for x86 architectures
  • Mixed F16 / F32 precision
  • 4-bit quantization support
  • Runs on the CPU

Installation

The easiest way is to use the prebuilt wheels:
pip install pyllamacpp

However, llama.cpp's compilation takes the architecture of the target CPU into account, so the prebuilt wheels may not be optimal for your machine and you might need to build from source:

git clone --recursive https://github.com/nomic-ai/pyllamacpp && cd pyllamacpp
pip install .
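
After either install, a quick smoke test confirms the native extension built correctly (a minimal sketch; it only checks that the module used throughout this README imports):

from pyllamacpp.model import Model  # raises ImportError if the build failed

print("pyllamacpp imported successfully")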

Usage

A simple Pythonic API is built on top of the llama.cpp C/C++ functions. You can call it from Python as follows:

from pyllamacpp.model import Model

def new_text_callback(text: str):
    # Called for each newly generated piece of text; print it as it streams in.
    print(text, end="", flush=True)

# Load the model with a 512-token context window.
model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512)
model.generate("Once upon a time, ", n_predict=55, new_text_callback=new_text_callback, n_threads=8)

If you don't want to use the callback, you can get the results from the generate method once the inference is finished:

generated_text = model.generate("Once upon a time, ", n_predict=55)
print(generated_text)
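
You can also combine both styles: stream tokens as they arrive while accumulating them for later use. A minimal sketch, assuming the same model file as above (chunks and collect_callback are illustrative names, not part of the API):

from pyllamacpp.model import Model

model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512)

chunks = []

def collect_callback(text: str):
    # Print each chunk as it streams in, and keep it for later.
    print(text, end="", flush=True)
    chunks.append(text)

model.generate("Once upon a time, ", n_predict=55, new_text_callback=collect_callback, n_threads=8)
full_text = "".join(chunks)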

Interactive Mode

If you want to run the program in interactive mode, pass a grab_text_callback function and set interactive=True in the generate call. grab_text_callback should always return a string, unless you wish to signal EOF, in which case it should return None.

from pyllamacpp.model import Model

def new_text_callback(text: str):
    print(text, end="", flush=True)

def grab_text_callback():
    inpt = input()
    # To signal EOF, return None
    if inpt == "END":
        return None
    return inpt

model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512)

# prompt from https://github.com/ggerganov/llama.cpp/blob/master/prompts/chat-with-bob.txt
prompt = """
Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision. To do this, Bob uses a database of information collected from many different sources, including books, journals, online articles, and more.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:"""

model.generate(prompt, n_predict=256, new_text_callback=new_text_callback, grab_text_callback=grab_text_callback, interactive=True, repeat_penalty=1.0, antiprompt=["User:"])
  • You can pass any llama context parameter as a keyword argument to the Model class
  • You can pass any gpt parameter as a keyword argument to the generate method (see the sketch below)
  • You can always refer to the short documentation for more details.
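
For example, here is a sketch that passes a few of llama.cpp's usual sampling parameters through as keyword arguments (the exact set of accepted names depends on the bundled llama.cpp version, so treat these as illustrative):

from pyllamacpp.model import Model

# Context parameters (e.g. seed) go to the Model constructor...
model = Model(ggml_model='./models/gpt4all-model.bin', n_ctx=512, seed=42)

# ...while generation/sampling parameters go to generate().
text = model.generate(
    "Once upon a time, ",
    n_predict=55,
    temp=0.8,            # sampling temperature
    top_k=40,            # top-k sampling
    top_p=0.95,          # nucleus sampling
    repeat_penalty=1.3,  # penalize repeated tokens
)
print(text)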

Supported model

GPT4All

Download a GPT4All model from https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/. The easiest approach is to download a file whose name ends in ggml.bin; older model versions require conversion.

If you have an older model downloaded that you want to convert, in your terminal run:

pyllamacpp-convert-gpt4all path/to/gpt4all_model.bin path/to/llama_tokenizer path/to/gpt4all-converted.bin
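
If you prefer to drive the conversion from Python instead of the shell, you can call the same console script through subprocess (a sketch; the paths are placeholders, as above):

import subprocess

# Invoke the bundled pyllamacpp-convert-gpt4all console script.
subprocess.run(
    [
        "pyllamacpp-convert-gpt4all",
        "path/to/gpt4all_model.bin",
        "path/to/llama_tokenizer",
        "path/to/gpt4all-converted.bin",
    ],
    check=True,  # raise if the conversion fails
)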

FAQs

  • Where to find the llama tokenizer? #5

Discussions and contributions

If you find a bug, please open an issue.

If you have any feedback, or you want to share how you are using this project, feel free to use the Discussions and open a new topic.

License

This project is licensed under the same license as llama.cpp (MIT License).

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • pyllamacpp-1.0.7.tar.gz (222.5 kB) - Source

Built Distributions

  • pyllamacpp-1.0.7-pp39-pypy39_pp73-win_amd64.whl (191.1 kB) - PyPy, Windows x86-64
  • pyllamacpp-1.0.7-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270.0 kB) - PyPy, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl (279.3 kB) - PyPy, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-pp39-pypy39_pp73-macosx_10_9_x86_64.whl (235.8 kB) - PyPy, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-pp38-pypy38_pp73-win_amd64.whl (191.0 kB) - PyPy, Windows x86-64
  • pyllamacpp-1.0.7-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (269.9 kB) - PyPy, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl (279.2 kB) - PyPy, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-pp38-pypy38_pp73-macosx_10_9_x86_64.whl (235.8 kB) - PyPy, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-cp311-cp311-win_amd64.whl (192.1 kB) - CPython 3.11, Windows x86-64
  • pyllamacpp-1.0.7-cp311-cp311-win32.whl (160.8 kB) - CPython 3.11, Windows x86
  • pyllamacpp-1.0.7-cp311-cp311-musllinux_1_1_x86_64.whl (792.2 kB) - CPython 3.11, musllinux: musl 1.1+ x86-64
  • pyllamacpp-1.0.7-cp311-cp311-musllinux_1_1_i686.whl (853.9 kB) - CPython 3.11, musllinux: musl 1.1+ i686
  • pyllamacpp-1.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270.0 kB) - CPython 3.11, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl (279.8 kB) - CPython 3.11, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-cp311-cp311-macosx_10_9_x86_64.whl (235.7 kB) - CPython 3.11, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-cp311-cp311-macosx_10_9_universal2.whl (427.8 kB) - CPython 3.11, macOS 10.9+ universal2 (ARM64, x86-64)
  • pyllamacpp-1.0.7-cp310-cp310-win_amd64.whl (192.1 kB) - CPython 3.10, Windows x86-64
  • pyllamacpp-1.0.7-cp310-cp310-win32.whl (160.8 kB) - CPython 3.10, Windows x86
  • pyllamacpp-1.0.7-cp310-cp310-musllinux_1_1_x86_64.whl (792.2 kB) - CPython 3.10, musllinux: musl 1.1+ x86-64
  • pyllamacpp-1.0.7-cp310-cp310-musllinux_1_1_i686.whl (853.9 kB) - CPython 3.10, musllinux: musl 1.1+ i686
  • pyllamacpp-1.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270.1 kB) - CPython 3.10, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl (279.9 kB) - CPython 3.10, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-cp310-cp310-macosx_10_9_x86_64.whl (235.7 kB) - CPython 3.10, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-cp310-cp310-macosx_10_9_universal2.whl (427.8 kB) - CPython 3.10, macOS 10.9+ universal2 (ARM64, x86-64)
  • pyllamacpp-1.0.7-cp39-cp39-win_amd64.whl (192.2 kB) - CPython 3.9, Windows x86-64
  • pyllamacpp-1.0.7-cp39-cp39-win32.whl (160.9 kB) - CPython 3.9, Windows x86
  • pyllamacpp-1.0.7-cp39-cp39-musllinux_1_1_x86_64.whl (793.0 kB) - CPython 3.9, musllinux: musl 1.1+ x86-64
  • pyllamacpp-1.0.7-cp39-cp39-musllinux_1_1_i686.whl (854.0 kB) - CPython 3.9, musllinux: musl 1.1+ i686
  • pyllamacpp-1.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270.3 kB) - CPython 3.9, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl (280.1 kB) - CPython 3.9, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-cp39-cp39-macosx_10_9_x86_64.whl (235.8 kB) - CPython 3.9, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-cp39-cp39-macosx_10_9_universal2.whl (428.0 kB) - CPython 3.9, macOS 10.9+ universal2 (ARM64, x86-64)
  • pyllamacpp-1.0.7-cp38-cp38-win_amd64.whl (192.0 kB) - CPython 3.8, Windows x86-64
  • pyllamacpp-1.0.7-cp38-cp38-win32.whl (160.8 kB) - CPython 3.8, Windows x86
  • pyllamacpp-1.0.7-cp38-cp38-musllinux_1_1_x86_64.whl (792.1 kB) - CPython 3.8, musllinux: musl 1.1+ x86-64
  • pyllamacpp-1.0.7-cp38-cp38-musllinux_1_1_i686.whl (853.2 kB) - CPython 3.8, musllinux: musl 1.1+ i686
  • pyllamacpp-1.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (270.0 kB) - CPython 3.8, manylinux: glibc 2.17+ x86-64
  • pyllamacpp-1.0.7-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl (279.7 kB) - CPython 3.8, manylinux: glibc 2.17+ i686
  • pyllamacpp-1.0.7-cp38-cp38-macosx_10_9_x86_64.whl (235.8 kB) - CPython 3.8, macOS 10.9+ x86-64
  • pyllamacpp-1.0.7-cp38-cp38-macosx_10_9_universal2.whl (427.8 kB) - CPython 3.8, macOS 10.9+ universal2 (ARM64, x86-64)
