
A GPT-J API to use with Python 3 to generate text, blogs, code, and more (Note: starting with version 3.0.7 the API uses the old domain again, so there may be some issues with rate limits)

Project description

GPT-J

A GPT-J API to use with Python

Installing gpt-j

pip install gptj

Parameters

prompt: the prompt you wish to give to the model

tokens: the number of tokens to generate (values of 204 or less are recommended)

temperature: controls the randomness of the model. Higher values are more random (suggested to keep it at 1.0 or less; something like 0.3 works well)

top_p: top probability; sampling considers only the most likely tokens

top_k: top-k sampling; only the k most likely tokens are considered

rep: repetition penalty, which controls how likely the model is to repeat the same tokens; lower values are more repetitive
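As a rough guide, the recommended ranges above can be sanity-checked before calling the API. The helper below is hypothetical and purely for illustration; gpt-j does not ship it:

```python
def check_params(tokens, temperature, top_p, top_k, rep):
    """Check sampling parameters against the ranges recommended above.

    Hypothetical helper for illustration; not part of gpt-j.
    Returns a list of warnings for values outside the suggested ranges.
    """
    warnings = []
    if not isinstance(tokens, int) or tokens < 1:
        raise ValueError("tokens must be a positive integer")
    if tokens > 204:
        warnings.append("values of 204 or less are recommended for tokens")
    if not isinstance(temperature, float) or not isinstance(top_p, float):
        raise TypeError("temperature and top_p must be floats")
    if temperature > 1.0:
        warnings.append("temperatures above 1.0 can produce incoherent text")
    if not isinstance(top_k, int):
        raise TypeError("top_k must be an integer")
    if not isinstance(rep, (int, float)):
        raise TypeError("rep must be a number")
    return warnings

print(check_params(100, 0.3, 1.0, 40, 0.216))  # []
```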

Advanced Parameters

user: the speaker, i.e. the person who is giving GPT-J a prompt

bot: an imaginary character of your choice

context: the part of the prompt that explains what is happening in the dialog

examples: a dictionary of user intentions and how the bot should respond

Basic Usage

In the prompt, enter something you want the model to complete

from gpt_j.Basic_api import simple_completion

prompt = "def perfect_square(num):"

The maximum length of the output response

max_length = 100

Temperature controls the creativity of the model

A low temperature means the model will take fewer chances when completing a prompt

A high temperature will make the model more creative

Both temperature and top probability must be a float

temperature = 0.09

top probability is an alternative way to control the randomness of the model

If you are using top probability, set temperature to one

If you are using temperature, set top probability to one

top_probability = 1.0

top k is an integer value that limits sampling to the k most likely tokens

top_k = 40

The repetition penalty results in less repetitive output

repetition = 0.216

Calling the simple_completion function

Here you set query equal to the desired values

Note: lengths higher than 512 tend to take more time to generate

query = simple_completion(prompt, length=max_length, temp=temperature, top_p=top_probability, top_k=top_k, rep=repetition)

Finally, print the result

print(query)
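The sampling knobs above can be pictured on a toy distribution. The sketch below only illustrates how top-k and top-p (nucleus) filtering narrow the candidate pool; it does not reproduce gpt-j's internals, and the token probabilities are made up:

```python
def filter_candidates(probs, top_k, top_p):
    """Keep the top_k most likely tokens, then trim to the smallest set
    whose cumulative probability reaches top_p (nucleus filtering)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# A made-up next-token distribution for illustration
toy = {"return": 0.5, "print": 0.3, "pass": 0.15, "raise": 0.05}
print(filter_candidates(toy, top_k=3, top_p=0.9))  # ['return', 'print', 'pass']
```

With top_p = 1.0 the top-k cut does all the filtering, which is why the guide suggests setting one of the two to 1.0 and steering with the other.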

Advanced Usage

Context is a string that describes the conversation

from gpt_j.gptj_api import Completion

context = "This is a calculator bot that will answer basic math questions"

Examples should be a dictionary mapping each user query to the way the model should respond to that query

Queries are to the left while target responses should be to the right

Here we can see the user is asking the model math related questions

The way the model should respond is given on the right

DO NOT USE PERIODS AT THE END OF USER EXAMPLES!

examples = {
    "5 + 5": "10",
    "6 - 2": "4",
    "4 * 15": "60",
    "10 / 5": "2",
    "144 / 24": "6",
    "7 + 1": "8"}
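The no-trailing-period rule above is easy to violate by accident. The check below is a hypothetical helper (not part of gpt-j) that catches it before the dictionary is passed in:

```python
def validate_examples(examples):
    """Check a few-shot examples dict: keys and values must be strings,
    and user-side queries must not end with a period.

    Hypothetical helper for illustration; not part of gpt-j.
    """
    for query, response in examples.items():
        if not isinstance(query, str) or not isinstance(response, str):
            raise TypeError("queries and responses must both be strings")
        if query.rstrip().endswith("."):
            raise ValueError(f"remove the trailing period from {query!r}")
    return True

print(validate_examples({"5 + 5": "10", "6 - 2": "4"}))  # True
```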

Here you pass in the context and the examples

context_setting = Completion(context, examples)

Enter a prompt relevant to the previously defined user queries

prompt = "48 / 6"

Pick a name relevant to what you are doing

Below you can change "Student" to "Task", for example, and get similar results

User = "Student"

Name your imaginary friend anything you want

Bot = "Calculator"

Max tokens is the maximum length of the output response

max_tokens = 50

Temperature controls the randomness of the model

A low temperature means the model will take fewer chances when completing a prompt

A high temperature will make the model more creative and produce more random outputs

Note: both temperature and top probability must be floats

temperature = 0.09

Top probability is an alternative way to control the randomness of the model

If you are using it, set temperature to one

If you are using temperature, set top probability to one

top_probability = 1.0

top k is an integer value that limits sampling to the k most likely tokens

top_k = 40

The repetition penalty results in less repetitive output

repetition = 0.216

Simply give all the parameters

Unfilled parameters will use default values

I recommend filling in all parameters for better results

Once everything is done execute the code below

response = context_setting.completion(prompt,
              user=User,
              bot=Bot,
              max_tokens=max_tokens,
              temperature=temperature,
              top_p=top_probability,
              top_k=top_k,
              rep=repetition)

Last but not least print the response

Please be patient; depending on the given parameters, a response can take longer

For quick responses, use the Basic API, which is a simplified version

print(response)
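Under the hood, the context, examples, and speaker labels are typically stitched into a single few-shot prompt before being sent to the model. The sketch below shows one plausible layout; gpt-j's actual prompt template may differ:

```python
def build_prompt(context, examples, user, bot, prompt):
    """Assemble a few-shot dialog prompt from the pieces defined above.

    Illustrative only -- gpt-j's real prompt template may differ.
    """
    lines = [context, ""]
    for query, response in examples.items():
        lines.append(f"{user}: {query}")
        lines.append(f"{bot}: {response}")
    # The final line is left open so the model completes the bot's answer
    lines.append(f"{user}: {prompt}")
    lines.append(f"{bot}:")
    return "\n".join(lines)

print(build_prompt(
    "This is a calculator bot that will answer basic math questions",
    {"5 + 5": "10", "6 - 2": "4"},
    user="Student", bot="Calculator", prompt="48 / 6"))
```

Seeing the assembled prompt makes it clear why consistent user/bot labels and period-free queries matter: the model continues whatever pattern the examples establish.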

Note: this is a relatively small model of 6B parameters and won't always produce accurate results

Disclaimer

I have removed the security from the API. Please do not use it for unethical purposes! I am not responsible for anything you do with the API.

License and copyright

Credit

This is all possible thanks to https://github.com/vicgalle/gpt-j-api

Feel free to check out the original API

License

© Michael D Arana

licensed under the MIT License.

