
Project description

transformers-stream-generator


Description

This is a text generation method which returns a generator, streaming out each token in real-time during inference, based on Huggingface/Transformers.

Web Demo

  • original
  • stream

Installation

pip install transformers-stream-generator

Usage

  1. Add two lines of code before your original code:
from transformers_stream_generator import init_stream_support
init_stream_support()
  2. Add do_stream=True to the model.generate call and keep do_sample=True; generate then returns a generator:
generator = model.generate(input_ids, do_stream=True, do_sample=True)
for token in generator:
    word = tokenizer.decode(token)
    print(word)

Example
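The usage steps above never show a complete run. Below is a minimal sketch of the streaming pattern the library provides, using stand-in functions instead of a real model (fake_generate, fake_decode, and the VOCAB table are hypothetical, for illustration only; an actual run needs a Transformers model and tokenizer as shown in the Usage section):

```python
# Illustration of the streaming pattern: a generator yields one token id
# at a time, and the caller decodes each id as soon as it arrives.
# fake_generate and fake_decode are stand-ins, NOT part of the library's API.

VOCAB = {0: "Hello", 1: ",", 2: " world", 3: "!"}

def fake_generate(token_ids):
    """Mimics model.generate(..., do_stream=True): yield token ids one by one."""
    for token_id in token_ids:
        yield token_id

def fake_decode(token_id):
    """Mimics tokenizer.decode for a single token id."""
    return VOCAB[token_id]

pieces = []
for token in fake_generate([0, 1, 2, 3]):
    word = fake_decode(token)
    pieces.append(word)
    print(word)  # each piece is available immediately, before generation finishes

print("".join(pieces))
```

The point of the generator interface is that each decoded piece can be displayed (or sent to a client) before the model has finished producing the full sequence, rather than waiting for one final string.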

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

transformers-stream-generator-0.0.5.tar.gz (13.0 kB)

Uploaded Source

File details

Details for the file transformers-stream-generator-0.0.5.tar.gz.

File metadata

File hashes

Hashes for transformers-stream-generator-0.0.5.tar.gz
Algorithm    Hash digest
SHA256       271deace0abf9c0f83b36db472c8ba61fdc7b04d1bf89d845644acac2795ed57
MD5          069ae3115525fa148d88af8f01772ee2
BLAKE2b-256  42c265f13aec253100e1916e9bd7965fe17bde796ebabeb1265f45191ab4ddc0

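To check a downloaded archive against the SHA256 digest listed above, you can compute the hash locally with the standard library. A minimal sketch (the filename is the source distribution listed above):

```python
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so large archives do not have to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "271deace0abf9c0f83b36db472c8ba61fdc7b04d1bf89d845644acac2795ed57"
# actual = sha256_of_file("transformers-stream-generator-0.0.5.tar.gz")
# A mismatch means the download is corrupted or has been tampered with.
```

pip performs an equivalent check automatically when hashes are pinned in a requirements file, so manual verification is only needed for hand-downloaded files.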
