GPT4All-J
Python bindings for the C++ port of GPT4All-J model.
Please migrate to the ctransformers library, which supports more models and has more features.
Installation
pip install gpt4all-j
Download the model from here.
Usage
from gpt4allj import Model
model = Model('/path/to/ggml-gpt4all-j.bin')
print(model.generate('AI is going to'))
If you are getting an illegal instruction error, try using instructions='avx' or instructions='basic':
model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')
If generation is slow, try building the C++ library from source (see the C++ Library section below).
Parameters
model.generate(prompt,
seed=-1,
n_threads=-1,
n_predict=200,
top_k=40,
top_p=0.9,
temp=0.9,
repeat_penalty=1.0,
repeat_last_n=64,
n_batch=8,
reset=True,
callback=None)
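For instance, a call overriding a few of these defaults might look like the following. The parameter values here are illustrative choices, not recommendations:

```python
# Illustrative sampling settings for model.generate(); the values below
# are example choices, not recommended defaults.
gen_kwargs = dict(
    seed=42,             # fix the random seed for reproducible output
    n_threads=4,         # number of CPU threads to use
    n_predict=100,       # generate at most 100 tokens
    top_k=40,            # sample from the 40 most likely tokens
    top_p=0.9,           # nucleus sampling threshold
    temp=0.7,            # lower temperature gives less random output
    repeat_penalty=1.1,  # penalize recently repeated tokens
)
# model.generate('AI is going to', **gen_kwargs)
```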
reset
If True, the context will be reset. To keep the previous context, use reset=False:
model.generate('Write code to sort numbers in Python.')
model.generate('Rewrite the code in JavaScript.', reset=False)
callback
If a callback function is passed, it will be called once for each generated token. To stop generating more tokens, return False inside the callback function:
def callback(token):
print(token)
model.generate('AI is going to', callback=callback)
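As a concrete sketch of the early-stopping behaviour, the callback below collects tokens and returns False once a budget is reached. The `tokens` list and `max_tokens` budget are illustrative additions, not part of the library API:

```python
# Stop generation after a fixed number of tokens by returning False
# from the callback. `tokens` and `max_tokens` are illustrative names.
tokens = []
max_tokens = 50

def stopping_callback(token):
    tokens.append(token)
    if len(tokens) >= max_tokens:
        return False  # tell the model to stop generating
    # returning None (or True) lets generation continue

# model.generate('AI is going to', callback=stopping_callback)
# print(''.join(tokens))
```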
LangChain
LangChain is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:
from gpt4allj.langchain import GPT4AllJ
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')
print(llm('AI is going to'))
If you are getting an illegal instruction error, try using instructions='avx' or instructions='basic':
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')
It can be used with other LangChain modules:
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=['question'])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run('What is AI?'))
Parameters
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
seed=-1,
n_threads=-1,
n_predict=200,
top_k=40,
top_p=0.9,
temp=0.9,
repeat_penalty=1.0,
repeat_last_n=64,
n_batch=8,
reset=True)
C++ Library
To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as:
from gpt4allj import Model, load_library
lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')
model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)
License
File details

Details for the file gpt4all-j-0.2.6.tar.gz:

- Download URL: gpt4all-j-0.2.6.tar.gz
- Upload date:
- Size: 1.8 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.8.10

File hashes:

Algorithm | Hash digest
---|---
SHA256 | d2681cc4b7974586ecd4fa614fbb7e315bb787944a66f6b5e103f202c004fa40
MD5 | 373a8e6526f4981258964904c67f8cd5
BLAKE2b-256 | 5a487c6c8a4d3262b77f8b909b2ca49d3d2196ce5bbbbad8192533a7a9da26d3