GPT4All-J

Python bindings for the C++ port of GPT4All-J model.

Installation

pip install gpt4all-j

Download the model from here.

Usage

from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))

Run in Google Colab

If you are getting an illegal instruction error, try using instructions='avx' or instructions='basic':

model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')

If generation is slow, try building the C++ library from source. Learn more

Parameters

model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True,
               callback=None)
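The parameter names above are the real ones from the generate() signature; as an illustrative sketch (the settings dict and its values are assumptions, not part of the API), they can be grouped and unpacked as keyword arguments:

```python
# Sketch: collect sampling settings in a dict and unpack them into generate().
# The parameter names match the signature above; the values are illustrative.
deterministic = {
    'seed': 42,        # fixed seed for reproducible sampling
    'temp': 0.1,       # low temperature -> less random output
    'top_k': 1,        # consider only the single most likely token
    'n_predict': 50,   # cap the number of generated tokens
}

# model.generate('AI is going to', **deterministic)
```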

reset

If True, context will be reset. To keep the previous context, use reset=False.

model.generate('Write code to sort numbers in Python.')
model.generate('Rewrite the code in JavaScript.', reset=False)

callback

If a callback function is passed, it will be called once for each generated token. To stop generating further tokens, return False from the callback function.

def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
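Because returning False stops generation, a callback can also cap the output length. The helper below is a sketch; the function name and token limit are illustrative, not part of the gpt4all-j API:

```python
# Sketch: a closure-based callback that stops generation after max_tokens tokens.
# Returning False from the callback tells generate() to stop.
def make_limited_callback(max_tokens):
    count = 0
    def callback(token):
        nonlocal count
        count += 1
        print(token, end='', flush=True)
        return count < max_tokens  # becomes False at the limit -> stop
    return callback

# model.generate('AI is going to', callback=make_limited_callback(20))
```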

LangChain

LangChain is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:

from gpt4allj.langchain import GPT4AllJ

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')

print(llm('AI is going to'))

If you are getting an illegal instruction error, try using instructions='avx' or instructions='basic':

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')

It can be used with other LangChain modules:

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=['question'])

llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run('What is AI?'))

Parameters

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True)

C++ Library

To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as:

from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)
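Since load_library takes filesystem paths, it can help to fail early with a clear message when a shared library is missing. The checked_paths helper below is an illustrative sketch, not part of the package; load_library and Model are the real imports:

```python
import os

# Sketch: verify that each shared-library path exists before loading it.
# The helper name and error message are illustrative; load_library itself
# comes from gpt4allj.
def checked_paths(*paths):
    for path in paths:
        if not os.path.isfile(path):
            raise FileNotFoundError(f'shared library not found: {path}')
    return paths

# lib = load_library(*checked_paths('/path/to/libgptj.so', '/path/to/libggml.so'))
```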

License

MIT
