
Turbo Text Transformer

Turbo Text Transformer is a Python command-line tool for generating text using OpenAI's GPT-3 and other models. It includes a modular model system that allows for easy integration of new models and customization of existing ones.

Best used in combination with the Turbo Text Transformer Prompts repository!

Configuration

Configs live in the .config folder; put your API key in there:

api_key: sk-<your api key here>
engine_params:
  frequency_penalty: 0
  logprobs: null
  max_tokens: 1000
  model: davinci
  n: 4
  presence_penalty: 0
  stop: null
  temperature: 0.9
  top_p: 1
models:
- babbage
- davinci
- gpt-3.5-turbo-0301
- text-davinci-003
etc.

The default config will be generated when you first try to use it.
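The engine_params map straight onto OpenAI completion parameters. As a rough sketch of how a config like the one above could be turned into request arguments (how ttt assembles its requests internally is an assumption here, not confirmed from its source):

```python
# Sketch: build keyword arguments for an OpenAI completion call from the
# engine_params section of the config above. How ttt does this internally
# is an assumption, not confirmed.
config = {
    "engine_params": {
        "frequency_penalty": 0,
        "logprobs": None,
        "max_tokens": 1000,
        "model": "davinci",
        "n": 4,
        "presence_penalty": 0,
        "stop": None,
        "temperature": 0.9,
        "top_p": 1,
    }
}

def build_request(prompt, overrides=None):
    """Merge the config defaults with per-run overrides (e.g. from -n or -M)."""
    params = dict(config["engine_params"])
    params.update(overrides or {})
    # Drop unset values so the API falls back to its own defaults.
    params = {k: v for k, v in params.items() if v is not None}
    params["prompt"] = prompt
    return params
```

Command-line flags like -n and -M would then simply override the corresponding config values per invocation.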

Installation

To install Turbo Text Transformer, you can use pip:

pip install turbo-text-transformer

or clone the repository and install it manually:

git clone https://github.com/fergusfettes/turbo-text-transformer.git
cd turbo-text-transformer
poetry install

Usage

ttt [OPTIONS] PROMPT
# or
cat file.txt | ttt [OPTIONS]
# or
ttt [OPTIONS]
# then paste into stdin

The tool reads its prompt from the command line or from stdin and generates text with the configured default model (davinci).

Options

There are several options you can use with the ttt command:

  • --model, -m MODEL: The name of the model to use. Defaults to "davinci".
  • --list_models, -l: List the available models.
  • --echo_prompt, -e: Echo the prompt in the output.
  • --format, -f FORMAT: The output format: "clean", "json", or "logprobs". Defaults to "clean".
  • --number, -n NUMBER: The number of completions to generate. Defaults to 1.
  • --logprobs, -L LOGPROBS: Show logprobs for each completion. Defaults to False.
  • --max_tokens, -M MAX_TOKENS: The maximum number of tokens to return. Defaults to None.

Configuration

Before using Turbo Text Transformer, you need to set up a configuration file. This should happen automatically when you run the ttt command for the first time.

This will create a configuration file in your home directory. You'll also be prompted to enter API keys for the transformer models you want to use. See the documentation for each model to learn how to obtain an API key.

Examples

Here are some examples of how to use Turbo Text Transformer:

# Generate text with the default model
ttt "Once upon a time, there was a"

# Generate text with a specific model
ttt -m text-davinci-003 "The meaning of life is"

# Generate multiple completions
ttt -n 5 "I like to eat"

# Show logprobs
ttt "I like to eat" -f logprobs

# Use the JSON format
ttt -f json "I like to eat"

If you pass the logprobs format flag, ttt will try to color the terminal output based on each token's logprob. This is a bit janky at the moment.
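A rough sketch of the coloring idea: map each token's logprob to an ANSI color, green for likely tokens and red for surprising ones. The thresholds here are illustrative, not ttt's actual values:

```python
def color_for_logprob(logprob):
    """Pick an ANSI color code for a token logprob.
    Thresholds are illustrative, not ttt's actual values."""
    if logprob > -1.0:
        return "\033[32m"  # green: likely token
    if logprob > -3.0:
        return "\033[33m"  # yellow: middling
    return "\033[31m"      # red: surprising token

def colorize(tokens, logprobs):
    """Interleave each token with its color code and a reset code."""
    reset = "\033[0m"
    return "".join(color_for_logprob(lp) + tok + reset
                   for tok, lp in zip(tokens, logprobs))
```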

You can also tell it to output formatted JSON with the -f json flag, which is useful for piping into other programs.

ttt -f json "The cat sat on the"
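For downstream tooling, the JSON output can be parsed with a few lines of Python. The schema assumed below (either a bare list of completions or an object with a "choices" list) is a guess; inspect a real ttt -f json run and adjust the keys accordingly:

```python
import json

def extract_completions(raw):
    """Pull completion strings out of ttt's JSON output.
    The schema (bare list, or object with a "choices" list) is assumed,
    not confirmed -- inspect real `ttt -f json` output and adjust."""
    data = json.loads(raw)
    if isinstance(data, list):
        return [str(item) for item in data]
    return [str(c) for c in data.get("choices", [])]
```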

You can also pipe text in. For example, I generated this README with the following command:

cat pyproject.toml ttt/__main__.py | tttp -f readme | ttt -m gpt-3.5-turbo -f clean > README.md

If you want to enter longer text freely, run ttt without a prompt and write or paste directly into stdin.

Models

Turbo Text Transformer supports text generation with all of the OpenAI models. Have a look at the model list with ttt -l.

Contributing

If you find a bug or would like to contribute to Turbo Text Transformer, please create a new GitHub issue or pull request.

License

Turbo Text Transformer is released under the MIT License. See LICENSE for more information.
