
LlamaIndex Llms Integration: Text Generation Inference

Integration with Hugging Face's Text Generation Inference (TGI) server for text generation.

Installation

pip install llama-index-llms-text-generation-inference

Usage

from llama_index.llms.text_generation_inference import TextGenerationInference

llm = TextGenerationInference(
    model_name="openai-community/gpt2",
    temperature=0.7,
    max_tokens=100,
    token="<your-token>",  # Optional
)

response = llm.complete("Hello, how are you?")
print(response)
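
The class follows the standard LlamaIndex LLM interface, so chat and streaming calls are also available. The sketch below is a minimal, non-authoritative example: it mirrors the constructor arguments shown above and assumes a reachable TGI deployment backing the model.

from llama_index.core.llms import ChatMessage
from llama_index.llms.text_generation_inference import TextGenerationInference

# Mirrors the example above; assumes a running TGI deployment serving this model.
llm = TextGenerationInference(
    model_name="openai-community/gpt2",
    temperature=0.7,
    max_tokens=100,
)

# Chat-style call using LlamaIndex's standard message objects.
messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="What does Text Generation Inference do?"),
]
print(llm.chat(messages))

# Streaming completion: chunks arrive incrementally as the server generates them.
for chunk in llm.stream_complete("Write one sentence about GPUs:"):
    print(chunk.delta, end="", flush=True)

Because these methods come from the shared LlamaIndex LLM base class, the same object can be passed anywhere a LlamaIndex LLM is expected, for example to a query engine.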
