GPT-2 text generation with just two lines of code!
Chatting Transformer
Easy text generation using state of the art NLP models.
Chatting Transformer is a Python library for generating text using GPT-2, a language model developed by OpenAI that specializes in text generation. With Chatting Transformer, you can load and use this model in just two lines of code.
Installation
pip install chattingtransformer
Basic Usage
from chattingtransformer import ChattingGPT2
model_name = "gpt2"
gpt2 = ChattingGPT2(model_name)
text = "In 10 years, AI will "
result = gpt2.generate_text(text)
print(result)  # e.g. "In 10 years, AI will have revolutionized the way we interact with the world..." (exact output may vary)
Available Models
Model | Parameters | Size |
---|---|---|
gpt2 | 124 M | 548 MB |
gpt2-medium | 355 M | 1.52 GB |
gpt2-large | 774 M | 3.25 GB |
gpt2-xl | 1.5 B | 6.43 GB |
from chattingtransformer import ChattingGPT2
gpt2 = ChattingGPT2("gpt2")
gpt2_medium = ChattingGPT2("gpt2-medium")
gpt2_large = ChattingGPT2("gpt2-large")
gpt2_xl = ChattingGPT2("gpt2-xl")
Predefined Methods
Below are the predefined methods that may be used to determine how the output is generated. To learn more about these methods, please visit this webpage.
- "greedy"
- "beam-search"
- "generic-sampling"
- "top-k-sampling"
- "top-p-nucleus-sampling"
from chattingtransformer import ChattingGPT2
gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
greedy_output = gpt2.generate_text(text, method="greedy")
beam_search_output = gpt2.generate_text(text, method="beam-search")
generic_sampling_output = gpt2.generate_text(text, method="generic-sampling")
top_k_sampling_output = gpt2.generate_text(text, method="top-k-sampling")
top_p_nucleus_sampling_output = gpt2.generate_text(text, method="top-p-nucleus-sampling")
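These method names correspond to the standard decoding strategies in Hugging Face's `generate()` API. As a rough sketch of the idea (the exact parameter values below are illustrative assumptions, not taken from the library's source), each predefined method can be thought of as a preset over a handful of generation parameters:

```python
# Hypothetical mapping from each predefined method name to the generation
# parameters it adjusts. Values are illustrative assumptions, not the
# library's actual presets.
METHOD_PRESETS = {
    "greedy": {"do_sample": False, "num_beams": 1},
    "beam-search": {"do_sample": False, "num_beams": 5, "early_stopping": True},
    "generic-sampling": {"do_sample": True, "top_k": 0, "top_p": 1.0},
    "top-k-sampling": {"do_sample": True, "top_k": 50},
    "top-p-nucleus-sampling": {"do_sample": True, "top_k": 0, "top_p": 0.92},
}

def settings_for(method: str) -> dict:
    """Return the generation-parameter preset for a predefined method."""
    try:
        return METHOD_PRESETS[method]
    except KeyError:
        raise ValueError(f"Unknown method: {method!r}") from None

print(settings_for("top-k-sampling"))
```

A lookup like this also makes it easy to reject misspelled method names with a clear error instead of silently falling back to a default.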
Custom Method
Below are the default values for the parameters you may adjust to modify how the model generates text. For more information about the purpose of each parameter, please visit Hugging Face's Transformers documentation on this webpage.
- max_length: 100
- min_length: 10
- do_sample: False
- early_stopping: False
- num_beams: 1
- temperature: 1
- top_k: 50
- top_p: 1.0
- repetition_penalty: 1
- length_penalty: 1
- no_repeat_ngram_size: 2
- bad_words_ids: None
Modify All Settings
You can modify all of the default text generation parameters at once, as shown below.
from chattingtransformer import ChattingGPT2
settings = {
    "max_length": 100,
    "min_length": 10,
    "do_sample": False,
    "early_stopping": False,
    "num_beams": 1,
    "temperature": 1,
    "top_k": 50,
    "top_p": 1.0,
    "repetition_penalty": 1,
    "length_penalty": 1,
    "no_repeat_ngram_size": 2,
    "bad_words_ids": None,
}
gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
result = gpt2.generate_text(text, method="custom", custom_settings=settings)
Modify Select Settings
You may modify only a subset of the settings; the remaining parameters will keep their default values.
from chattingtransformer import ChattingGPT2
settings = {
    "max_length": 200,
    "min_length": 100,
}
gpt2 = ChattingGPT2("gpt2")
text = "I think therefore I "
result = gpt2.generate_text(text, method="custom", custom_settings=settings)
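One plausible way to implement this "partial override" behavior (a minimal sketch, assuming a simple dictionary merge rather than the library's actual internals) is to overlay the user-supplied settings on top of the defaults:

```python
# Default generation settings, mirroring the values shown earlier in this
# document. merge_settings is a hypothetical helper, not part of the library.
DEFAULT_SETTINGS = {
    "max_length": 100,
    "min_length": 10,
    "do_sample": False,
    "early_stopping": False,
    "num_beams": 1,
    "temperature": 1,
    "top_k": 50,
    "top_p": 1.0,
    "repetition_penalty": 1,
    "length_penalty": 1,
    "no_repeat_ngram_size": 2,
    "bad_words_ids": None,
}

def merge_settings(custom_settings=None):
    """Overlay user-supplied settings on the defaults, rejecting unknown keys."""
    merged = dict(DEFAULT_SETTINGS)
    if custom_settings:
        unknown = set(custom_settings) - set(DEFAULT_SETTINGS)
        if unknown:
            raise ValueError(f"Unknown settings: {sorted(unknown)}")
        merged.update(custom_settings)
    return merged

result = merge_settings({"max_length": 200, "min_length": 100})
print(result["max_length"])  # 200
print(result["top_k"])       # 50 (unchanged default)
```

Validating keys before merging means a typo such as `"max_lenght"` fails loudly instead of being silently ignored.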