llama-index-llms-everlyai

Project description

LlamaIndex LLMs Integration: EverlyAI

Installation

  1. Install the required Python packages:

    pip install llama-index-llms-everlyai llama-index
    
  2. Set the EverlyAI API key as an environment variable or pass it directly to the constructor:

    import os
    
    os.environ["EVERLYAI_API_KEY"] = "<your-api-key>"
    

    Or pass the key directly in your Python code:

    from llama_index.llms.everlyai import EverlyAI

    llm = EverlyAI(api_key="your-api-key")
    
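The two approaches can be combined in a small helper that prefers an explicit key and falls back to the environment. This is a sketch; `resolve_api_key` is our own illustrative name, not part of the package:

```python
import os

def resolve_api_key(explicit=None):
    """Prefer an explicitly passed key; fall back to the environment variable."""
    key = explicit or os.environ.get("EVERLYAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set the EVERLYAI_API_KEY environment variable or pass api_key explicitly"
        )
    return key
```

Constructing the client as `EverlyAI(api_key=resolve_api_key())` then keeps the key itself out of source code.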

Usage

Basic Chat

To send a message and get a response (e.g., a joke):

from llama_index.llms.everlyai import EverlyAI
from llama_index.core.llms import ChatMessage

# Initialize EverlyAI with API key
llm = EverlyAI(api_key="your-api-key")

# Create a message
message = ChatMessage(role="user", content="Tell me a joke")

# Call the chat method
resp = llm.chat([message])
print(resp)

Example output:

Why don't scientists trust atoms?
Because they make up everything!

Streamed Chat

To stream a response for more dynamic conversations (e.g., storytelling):

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])

for r in resp:
    print(r.delta, end="")

Example output (partial):

As the sun set over the horizon, a young girl named Lily sat on the beach, watching the waves roll in...
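Each streamed chunk carries only the newly generated text in its delta attribute, so if you also want the full response you accumulate the deltas yourself. The pattern can be sketched with a stand-in chunk type (Chunk and collect are our own names for illustration; no API call is involved):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    delta: str  # stand-in for the .delta field on streamed responses

def collect(stream):
    """Print deltas as they arrive and return the assembled text."""
    parts = []
    for chunk in stream:
        print(chunk.delta, end="")
        parts.append(chunk.delta)
    return "".join(parts)

story = collect([Chunk("As the sun "), Chunk("set over "), Chunk("the horizon...")])
```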

Complete Tasks

For single-prompt tasks that don't need chat history, use the complete method:

resp = llm.complete("Tell me a joke")
print(resp)

Example output:

Why don't scientists trust atoms?
Because they make up everything!

Streamed Completion

To stream a completion, for example when generating a story, use stream_complete:

resp = llm.stream_complete("Tell me a story in 250 words")

for r in resp:
    print(r.delta, end="")

Example output (partial):

As the sun set over the horizon, a young girl named Maria sat on the beach, watching the waves roll in...

Notes

  • Ensure the API key is set correctly before making any requests.
  • The stream_chat and stream_complete methods allow for real-time response streaming, making them ideal for dynamic and lengthy outputs like stories.
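Hosted APIs can fail transiently (timeouts, rate limits), so requests are often wrapped in a retry. A simple backoff wrapper, sketched here as our own helper rather than anything shipped with the package:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical usage: with_retries(lambda: llm.complete("Tell me a joke"))
```

In production you would likely narrow the except clause to the network errors you actually expect.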

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/everlyai/

Download files

Download the file for your platform.

Source Distribution

llama_index_llms_everlyai-0.5.1.tar.gz (5.0 kB)

Built Distribution


llama_index_llms_everlyai-0.5.1-py3-none-any.whl (4.9 kB)

File details

Details for the file llama_index_llms_everlyai-0.5.1.tar.gz.

File metadata

  • Download URL: llama_index_llms_everlyai-0.5.1.tar.gz
  • Size: 5.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.9 (Ubuntu 24.04, CI)

File hashes

Hashes for llama_index_llms_everlyai-0.5.1.tar.gz
Algorithm    Hash digest
SHA256       e5713106d6c6bf016225a3d7a58b98cc38d055054081c3ae9e60ab5f6f612060
MD5          1994399b318fd3e4116714fea38f217b
BLAKE2b-256  991ac9888a8833c1e977179816db5e006137b134ca8caaa8c1a1192144e5a94f


File details

Details for the file llama_index_llms_everlyai-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: llama_index_llms_everlyai-0.5.1-py3-none-any.whl
  • Size: 4.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.9 (Ubuntu 24.04, CI)

File hashes

Hashes for llama_index_llms_everlyai-0.5.1-py3-none-any.whl
Algorithm    Hash digest
SHA256       0babe020b42817afe421a8cd909353c4aed595cdccfd0c753cea8d4c5e5720c2
MD5          587d1b2c94011f9046f03fd904cb03b9
BLAKE2b-256  1e0e528b68acee94b86f42647724afed0658ac3e678588bd39d860bb50eb4d5e

