
PyGPT4All

Official Python CPU inference for GPT4All language models based on llama.cpp and ggml


Installation

pip install pygpt4all

Tutorial

First, you will need to download the model weights:

| Model    | Download link                                              |
|----------|------------------------------------------------------------|
| GPT4All  | http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin      |
| GPT4All-J | https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin  |
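If you prefer to script the download, a minimal helper could look like the sketch below. This is illustrative only; the `download_weights` name and the skip-if-present behavior are my own, not part of the pygpt4all API.

```python
import os
import urllib.request

def download_weights(url, dest_dir="models"):
    """Download a ggml weights file into dest_dir, skipping if it already exists.

    Illustrative helper only; not part of the pygpt4all API.
    """
    os.makedirs(dest_dir, exist_ok=True)
    filename = url.rsplit("/", 1)[-1]      # e.g. ggml-gpt4all-l13b-snoozy.bin
    dest = os.path.join(dest_dir, filename)
    if not os.path.exists(dest):           # the files are several GB; avoid re-downloading
        urllib.request.urlretrieve(url, dest)
    return dest
```

The early-exit check matters in practice, since the weights are multi-gigabyte files.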

Model instantiation

Once the weights are downloaded, you can instantiate the models as follows:

  • GPT4All model

    from pygpt4all import GPT4All

    model = GPT4All('path/to/ggml-gpt4all-l13b-snoozy.bin')

  • GPT4All-J model

    from pygpt4all import GPT4All_J

    model = GPT4All_J('path/to/ggml-gpt4all-j-v1.3-groovy.bin')

Simple generation

The generate function is used to generate new tokens from the prompt given as input:

for token in model.generate("Tell me a joke?\n"):
    print(token, end='', flush=True)

Interactive Dialogue

You can set up an interactive dialogue by simply keeping the model variable alive:

while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='')
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break
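The loop above sends each prompt on its own. If you want to manage the transcript yourself, for logging or to re-prompt with the full history, one common pattern is to accumulate the dialogue into a single prompt string. The sketch below uses a stand-in `echo_generate` model so it runs without weights, and the `You:`/`AI:` formatting convention is my own assumption, not something pygpt4all mandates.

```python
def echo_generate(prompt):
    """Stand-in for model.generate(): 'answers' with the last user line reversed."""
    last_line = prompt.rstrip().splitlines()[-1]
    answer = last_line.removeprefix("You: ")[::-1]
    for ch in answer:
        yield ch

def chat_turn(history, user_input, generate=echo_generate):
    # Append the user's turn, re-prompt with the whole transcript so far,
    # then append the model's reply so the next turn sees it too.
    history += f"You: {user_input}\n"
    reply = ''.join(generate(history))
    history += f"AI: {reply}\n"
    return history, reply
```

With a real model, swap `echo_generate` for `model.generate`; keeping the transcript in `history` lets you persist or replay conversations.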

API reference

You can check the API reference documentation for more details.

License

This project is licensed under the MIT License.
