PyGPT4All
Official Python CPU inference for GPT4All language models based on llama.cpp and ggml.
NB: Under active development
Installation
- The easiest way is to install the prebuilt wheels:
pip install pygpt4all
- Or build it from source:
git clone --recursive https://github.com/nomic-ai/pygpt4all && cd pygpt4all
pip install .
Usage
GPT4All model
Download a GPT4All model from https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/. The easiest approach is to download a file whose name ends in ggml.bin.
from pygpt4all.models.gpt4all import GPT4All

def new_text_callback(text):
    print(text, end="")

model = GPT4All('./models/ggml-gpt4all-j.bin')
model.generate("Once upon a time, ", n_predict=55, new_text_callback=new_text_callback)
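The `new_text_callback` is invoked once per generated token, so instead of printing you can accumulate the streamed text into a string. A minimal sketch of that pattern (using a hypothetical stand-in for `model.generate`, so it runs without a downloaded model):

```python
# Accumulate streamed tokens via a callback instead of printing them.
chunks = []

def new_text_callback(text):
    chunks.append(text)

def fake_generate(prompt, new_text_callback):
    # Hypothetical stand-in for model.generate: emits the prompt
    # followed by a few tokens, mimicking how output is streamed.
    for token in [prompt, "there ", "was ", "a ", "model."]:
        new_text_callback(token)

fake_generate("Once upon a time, ", new_text_callback=new_text_callback)
result = "".join(chunks)
print(result)  # Once upon a time, there was a model.
```

The same callback works with the real `model.generate` call above; only the token source differs.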
GPT4All-J model
Download the GPT4All-J model from https://gpt4all.io/models/ggml-gpt4all-j-v1.2-jazzy.bin.
from pygpt4all.models.gpt4all_j import GPT4All_J

def new_text_callback(text):
    print(text, end="")

model = GPT4All_J('./models/ggml-gpt4all-j-v1.2-jazzy.bin')
model.generate("Once upon a time, ", n_predict=55, new_text_callback=new_text_callback)
License
This project is licensed under the MIT License.