Hugging Face Text Generation Python Client
Text Generation
The Hugging Face Text Generation Python library provides a convenient way of interfacing with a
text-generation-inference
instance running on your own infrastructure or on the Hugging Face Hub.
Get Started
Install
pip install text-generation
Usage
from text_generation import InferenceAPIClient
client = InferenceAPIClient("bigscience/bloomz")
text = client.generate("Why is the sky blue?").generated_text
print(text)
# ' Rayleigh scattering'
# Token Streaming
text = ""
for response in client.generate_stream("Why is the sky blue?"):
    if not response.token.special:
        text += response.token.text
print(text)
# ' Rayleigh scattering'
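The streaming loop above concatenates the text of every non-special token. A minimal, self-contained sketch of that accumulation logic, using hypothetical stand-in classes (the real library defines its own response types with the same `token.text` / `token.special` shape):

```python
from dataclasses import dataclass


# Stand-in structures for illustration only; the real library's
# response objects expose the same token.text / token.special fields.
@dataclass
class Token:
    text: str
    special: bool


@dataclass
class StreamResponse:
    token: Token


def accumulate(stream):
    """Concatenate the text of non-special tokens, as in the loop above."""
    text = ""
    for response in stream:
        if not response.token.special:
            text += response.token.text
    return text


# Simulated stream: a special token (skipped) followed by generated text.
stream = [
    StreamResponse(Token("<s>", True)),
    StreamResponse(Token(" Rayleigh", False)),
    StreamResponse(Token(" scattering", False)),
]
print(accumulate(stream))  # prints " Rayleigh scattering"
```

Special tokens (such as beginning- or end-of-sequence markers) are filtered out so only the model's generated text is kept.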
or with the asynchronous client:
from text_generation import InferenceAPIAsyncClient
client = InferenceAPIAsyncClient("bigscience/bloomz")
response = await client.generate("Why is the sky blue?")
print(response.generated_text)
# ' Rayleigh scattering'
# Token Streaming
text = ""
async for response in client.generate_stream("Why is the sky blue?"):
    if not response.token.special:
        text += response.token.text
print(text)
# ' Rayleigh scattering'
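The async loop above uses `async for` to consume the stream, so it must run inside a coroutine (e.g. via `asyncio.run`). A runnable sketch of that pattern with a hypothetical stand-in async generator in place of `client.generate_stream`:

```python
import asyncio
from dataclasses import dataclass


# Stand-in structures for illustration; see the streaming example above.
@dataclass
class Token:
    text: str
    special: bool


@dataclass
class StreamResponse:
    token: Token


async def fake_stream():
    # Hypothetical replacement for client.generate_stream(...),
    # yielding responses one token at a time.
    for tok in [
        Token("<s>", True),
        Token(" Rayleigh", False),
        Token(" scattering", False),
    ]:
        yield StreamResponse(tok)


async def main():
    text = ""
    async for response in fake_stream():
        if not response.token.special:
            text += response.token.text
    return text


print(asyncio.run(main()))  # prints " Rayleigh scattering"
```

The same filtering on `token.special` applies; only the event-loop plumbing differs from the synchronous version.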
Download files
Source Distribution
text-generation-0.1.0.tar.gz (6.4 kB)
Built Distribution
text_generation-0.1.0-py3-none-any.whl

Hashes for text_generation-0.1.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | c6bc9be95a9377869a99e4c202bd0c9128e855ce0caecf47a61083d226870b37 |
| MD5 | b6b434a2f6361339add45609a953bcf5 |
| BLAKE2b-256 | d71047112b447ba05117e7d4d42dd465fcdea32538e45c0bbc847869cbb19027 |