
Python bindings for the C++ port of GPT4All-J model.

Project description


Installation

pip install gpt4all-j

Download the model from here.

Usage

from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))

Run in Google Colab

If you get an illegal instruction error, try passing instructions='avx' or instructions='basic':

model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')
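Which value to pass depends on what your CPU supports; on Linux, the supported flags are listed in /proc/cpuinfo. A small helper for choosing between the two documented values ('avx' and 'basic') — this is a hypothetical convenience function, not part of the gpt4all-j API:

```python
def pick_instructions(cpu_flags):
    """Choose a gpt4all-j 'instructions' value from a collection of CPU
    flag names (e.g. parsed from the 'flags' line of /proc/cpuinfo)."""
    flags = set(cpu_flags)
    # 'avx' is faster when available; 'basic' should run anywhere.
    return 'avx' if 'avx' in flags else 'basic'
```

You could then call Model(path, instructions=pick_instructions(flags)). If you omit the instructions argument entirely, the library picks its default build.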

If generation is slow, try building the C++ library from source; see the C++ Library section below.

Parameters

model.generate(prompt,
               seed=-1,             # RNG seed; -1 picks a random seed
               n_threads=-1,        # CPU threads to use; -1 autodetects
               n_predict=200,       # maximum number of tokens to generate
               top_k=40,            # sample only from the top_k most likely tokens
               top_p=0.9,           # nucleus sampling: probability mass to keep
               temp=0.9,            # sampling temperature
               repeat_penalty=1.0,  # penalty applied to recently generated tokens
               repeat_last_n=64,    # how many recent tokens the penalty considers
               n_batch=8,           # tokens evaluated per batch for the prompt
               callback=None)       # per-token callback; see below
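top_k and top_p control how the next token is chosen: top_k keeps only the k most likely candidates, and top_p (nucleus sampling) keeps the smallest set of those whose cumulative probability reaches p. A minimal, library-independent sketch of that filtering step, assuming raw logits as input:

```python
import math

def top_k_top_p_filter(logits, top_k=40, top_p=0.9):
    """Return the indices of tokens that survive top-k then top-p
    (nucleus) filtering, in descending order of probability."""
    # Keep the top_k highest-scoring token indices.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    order = order[:top_k]
    # Softmax over the survivors (shift by the max for numerical stability).
    mx = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - mx) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus step: keep tokens until the cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for idx, p in zip(order, probs):
        kept.append(idx)
        cum += p
        if cum >= top_p:
            break
    return kept
```

The model then samples (at temperature temp) only from the surviving tokens; lower top_p or top_k makes output more deterministic.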

callback

If a callback function is passed, it will be called once for each generated token. To stop generation early, return False from the callback.

def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
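For example, a callback that collects tokens and stops generation after a fixed number. This factory is a hypothetical helper built on the callback contract above (return False to stop), not part of the library:

```python
def make_stopping_callback(max_tokens):
    """Build a callback that collects tokens and stops generation
    once max_tokens have been produced. Returns (callback, collected)."""
    collected = []

    def callback(token):
        collected.append(token)
        # Returning False tells generate() to stop producing tokens.
        return len(collected) < max_tokens

    return callback, collected
```

Usage: cb, tokens = make_stopping_callback(50); model.generate(prompt, callback=cb); the generated text is then ''.join(tokens).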

LangChain

LangChain is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:

from gpt4allj.langchain import GPT4AllJ

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')

print(llm('AI is going to'))

If you get an illegal instruction error, try passing instructions='avx' or instructions='basic':

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')

It can be used with other LangChain modules:

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=['question'])

llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run('What is AI?'))
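PromptTemplate here is essentially doing string substitution: the rendered prompt that the chain sends to the model is roughly what plain str.format would produce:

```python
template = """Question: {question}

Answer:"""

# Roughly what LLMChain passes to the model for the input 'What is AI?':
prompt_text = template.format(question='What is AI?')
print(prompt_text)
```

The model's completion is then generated after the trailing "Answer:" line.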

Parameters

These are the same generation parameters accepted by model.generate (see above).

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8)

C++ Library

To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can load them like this:

from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)

License

MIT

Download files

Source distribution: gpt4all-j-0.2.4.tar.gz (1.8 MB), uploaded with twine/4.0.2 on CPython/3.8.10.

Hashes for gpt4all-j-0.2.4.tar.gz:

  • SHA256: c9379bdf7deb8e01ff0730c14051290b3e1d1547a0bc3e10ab2aae3c95a1b763
  • MD5: 4a9d11f66accfaf28161051ed25569c5
  • BLAKE2b-256: b7885fd1bb24b4724f6749ce4de5894aca624fd74223c9be0b399b575909a7c1
