GPT2 text generation with just two lines of code!

Chatting Transformer

Easy text generation using state-of-the-art NLP models.

Chatting Transformer is a Python library for generating text with GPT-2, a language model developed by OpenAI that specializes in text generation. With Chatting Transformer, you can load and use this model with just two lines of code.

Installation

pip install chattingtransformer

Basic Usage

from chattingtransformer import ChattingGPT2


model_name = "gpt2" 
gpt2 = ChattingGPT2(model_name)

text = "In 10 years, AI will " 
result = gpt2.generate_text(text) 

print(result) # Outputs: In 10 years, AI will  have revolutionized the way we interact with the world...

Available Models

Model        Parameters   Size
gpt2         134 M        548 MB
gpt2-medium  335 M        1.52 GB
gpt2-large   774 M        3.25 GB
gpt2-xl      1.5 B        6.43 GB
To load a different model, pass its name to ChattingGPT2:

from chattingtransformer import ChattingGPT2

gpt2 = ChattingGPT2("gpt2")
gpt2_medium = ChattingGPT2("gpt2-medium")
gpt2_large = ChattingGPT2("gpt2-large")
gpt2_xl = ChattingGPT2("gpt2-xl")
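
As a quick illustration (a sketch that just reuses the API shown above; the larger models take considerably longer to download and run), you can generate a completion from each model size in turn:

from chattingtransformer import ChattingGPT2

# Generate a completion from every available model size.
# Larger models require more memory and a longer download.
prompt = "In 10 years, AI will "
for name in ["gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl"]:
    model = ChattingGPT2(name)
    print(name + ": " + model.generate_text(prompt))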

Predefined Methods

Below are the predefined methods that may be used to determine how the output is generated. To learn more about these methods, please visit this webpage.

  1. "greedy"
  2. "beam-search"
  3. "generic-sampling"
  4. "top-k-sampling"
  5. "top-p-nucleus-sampling"
The example below shows each method in use:

from chattingtransformer import ChattingGPT2

gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
greedy_output = gpt2.generate_text(text, method = "greedy")
beam_search_output = gpt2.generate_text(text, method = "beam-search")
generic_sampling_output = gpt2.generate_text(text, method = "generic-sampling")
top_k_sampling_output = gpt2.generate_text(text, method = "top-k-sampling")
top_p_nucleus_sampling_output = gpt2.generate_text(text, method = "top-p-nucleus-sampling")
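
For example, you can compare the five strategies side by side by looping over the method names (a small usage sketch based on the calls above):

from chattingtransformer import ChattingGPT2

# Print the output of each predefined text generation method for the same prompt.
gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
methods = ["greedy", "beam-search", "generic-sampling",
           "top-k-sampling", "top-p-nucleus-sampling"]
for method in methods:
    print(method + ": " + gpt2.generate_text(text, method = method))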

Custom Method

Below are the parameters you may adjust to modify how the model generates text; the default values for most of them appear in the settings dictionary in the next section. For more information about the purpose of each parameter, please visit Hugging Face's Transformers documentation on this webpage.

  1. max_length
  2. min_length
  3. do_sample
  4. early_stopping
  5. num_beams
  6. temperature
  7. top_k
  8. top_p
  9. repetition_penalty
  10. length_penalty
  11. no_repeat_ngram_size
  12. bad_words_ids

Modify All Settings

You can modify all of the default text generation parameters at once, as shown below.

from chattingtransformer import ChattingGPT2

settings = {
  "do_sample": False,
  "early_stopping": False,
  "num_beams": 1,
  "temperature": 1,
  "top_k": 50,
  "top_p": 1.0,
  "repetition_penalty": 1,
  "length_penalty": 1,
  "no_repeat_ngram_size": 2,
  "bad_words_ids": None,
}
gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "

result = gpt2.generate_text(text, method = "custom", custom_settings = settings)
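
If you only want to change a few values, one option is to start from the defaults above and override the entries you care about before passing them in (a minimal sketch reusing the "custom" method shown above; the overridden values here are arbitrary examples):

from chattingtransformer import ChattingGPT2

# Copy of the default settings above, with a few values overridden.
settings = {
  "do_sample": True,         # enable sampling instead of greedy decoding
  "early_stopping": False,
  "num_beams": 1,
  "temperature": 0.8,        # arbitrary example value
  "top_k": 40,               # arbitrary example value
  "top_p": 1.0,
  "repetition_penalty": 1,
  "length_penalty": 1,
  "no_repeat_ngram_size": 2,
  "bad_words_ids": None,
}

gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
result = gpt2.generate_text(text, method = "custom", custom_settings = settings)
print(result)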

Modify Length

You may modify the minimum and maximum length of the output using the min_length and max_length parameters of the generate_text method.

from chattingtransformer import ChattingGPT2


gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "

result = gpt2.generate_text(text, min_length=5, max_length=500)
