
LlamaIndex LLMs Integration: EverlyAI

Installation

  1. Install the required Python packages:

    %pip install llama-index-llms-everlyai
    %pip install llama-index
    
  2. Set the EverlyAI API key as an environment variable or pass it directly to the constructor:

    import os
    
    os.environ["EVERLYAI_API_KEY"] = "<your-api-key>"
    

    Or pass it directly to the constructor in your Python code:

    from llama_index.llms.everlyai import EverlyAI

    llm = EverlyAI(api_key="your-api-key")
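
    If the key is already exported in the environment, it can usually be omitted from the constructor. A minimal sketch, assuming the constructor falls back to the EVERLYAI_API_KEY variable when no api_key argument is given:

    import os

    from llama_index.llms.everlyai import EverlyAI

    # Assumed fallback: EverlyAI reads EVERLYAI_API_KEY when api_key
    # is omitted; verify against your installed version.
    os.environ["EVERLYAI_API_KEY"] = "<your-api-key>"
    llm = EverlyAI()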
    

Usage

Basic Chat

To send a message and get a response (e.g., a joke):

from llama_index.llms.everlyai import EverlyAI
from llama_index.core.llms import ChatMessage

# Initialize EverlyAI with API key
llm = EverlyAI(api_key="your-api-key")

# Create a message
message = ChatMessage(role="user", content="Tell me a joke")

# Call the chat method
resp = llm.chat([message])
print(resp)

Example output:

Why don't scientists trust atoms?
Because they make up everything!
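
Chat is multi-turn: pass the running conversation as a list of messages. A short sketch (the system prompt and turns here are illustrative, not part of the original example):

from llama_index.llms.everlyai import EverlyAI
from llama_index.core.llms import ChatMessage

llm = EverlyAI(api_key="your-api-key")

# Conversation history: a system prompt, then alternating user/assistant turns
messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="Tell me a joke"),
    ChatMessage(
        role="assistant",
        content="Why don't scientists trust atoms? Because they make up everything!",
    ),
    ChatMessage(role="user", content="Explain the pun in one sentence."),
]

resp = llm.chat(messages)
print(resp)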

Streamed Chat

To stream a response token by token, which suits longer outputs such as stories:

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])

for r in resp:
    print(r.delta, end="")

Example output (partial):

As the sun set over the horizon, a young girl named Lily sat on the beach, watching the waves roll in...
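
In llama-index, each streamed chunk carries both the new delta and the message accumulated so far, so the full text is available once the loop ends. A small sketch (the character-count print is illustrative):

message = ChatMessage(role="user", content="Tell me a story in 250 words")

final = None
for r in llm.stream_chat([message]):
    print(r.delta, end="")
    final = r.message  # the message accumulated up to this chunk

print()
print(f"Streamed {len(final.content)} characters")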

Completion

To use the complete method for single-prompt tasks, such as telling a joke:

resp = llm.complete("Tell me a joke")
print(resp)

Example output:

Why don't scientists trust atoms?
Because they make up everything!
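
complete returns a response object rather than a plain string; in llama-index the raw text is available on its text attribute:

resp = llm.complete("Tell me a joke")
print(resp.text)  # the bare string, without the response wrapper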

Streamed Completion

To stream a completion as it is generated, use stream_complete:

resp = llm.stream_complete("Tell me a story in 250 words")

for r in resp:
    print(r.delta, end="")

Example output (partial):

As the sun set over the horizon, a young girl named Maria sat on the beach, watching the waves roll in...
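
llama-index LLMs also expose async counterparts (achat, acomplete, astream_chat, astream_complete). A sketch assuming EverlyAI inherits this standard interface:

import asyncio

from llama_index.llms.everlyai import EverlyAI

llm = EverlyAI(api_key="your-api-key")

async def main():
    # Async counterpart of complete; astream_complete streams deltas similarly
    resp = await llm.acomplete("Tell me a joke")
    print(resp)

asyncio.run(main())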

Notes

  • Ensure the API key is set correctly before making any requests (see the sketch after these notes).
  • The stream_chat and stream_complete methods allow for real-time response streaming, making them ideal for dynamic and lengthy outputs like stories.
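
A minimal sketch of the first note, failing fast when the key is missing (the check and error message are illustrative):

import os

from llama_index.llms.everlyai import EverlyAI

api_key = os.environ.get("EVERLYAI_API_KEY")
if not api_key:
    raise RuntimeError("Set EVERLYAI_API_KEY before creating the client.")

llm = EverlyAI(api_key=api_key)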

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/everlyai/
