LlamaIndex LLMs Integration: EverlyAI
Installation
- Install the required Python packages:

```bash
%pip install llama-index-llms-everlyai
!pip install llama-index
```
- Set the EverlyAI API key as an environment variable or pass it directly to the constructor:

```python
import os

os.environ["EVERLYAI_API_KEY"] = "<your-api-key>"
```

Or pass it directly in your Python code:

```python
from llama_index.llms.everlyai import EverlyAI

llm = EverlyAI(api_key="your-api-key")
```
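If the `EVERLYAI_API_KEY` environment variable is set as above, the key does not have to be repeated in code; a minimal sketch, assuming the constructor falls back to that variable:

```python
from llama_index.llms.everlyai import EverlyAI

# Assumes EVERLYAI_API_KEY is already set in the environment,
# so no api_key argument is needed here.
llm = EverlyAI()
```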
Usage
Basic Chat
To send a message and get a response (e.g., a joke):
```python
from llama_index.llms.everlyai import EverlyAI
from llama_index.core.llms import ChatMessage

# Initialize EverlyAI with the API key
llm = EverlyAI(api_key="your-api-key")

# Create a message
message = ChatMessage(role="user", content="Tell me a joke")

# Call the chat method
resp = llm.chat([message])
print(resp)
```
Example output:

```
Why don't scientists trust atoms?
Because they make up everything!
```
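Because `chat` takes a list of messages, a system prompt and earlier turns can be included as well. A minimal sketch, assuming the standard `ChatMessage` roles and reusing the `llm` instance from above:

```python
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="Tell me a joke about computers."),
]

resp = llm.chat(messages)
print(resp.message.content)  # just the assistant's reply text
```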
Streamed Chat
To stream a response for more dynamic conversations (e.g., storytelling):
```python
message = ChatMessage(role="user", content="Tell me a story in 250 words")

resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
Example output (partial):

```
As the sun set over the horizon, a young girl named Lily sat on the beach, watching the waves roll in...
```
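If the full text is needed once streaming finishes, the deltas can simply be accumulated. A minimal sketch, reusing the `message` from above:

```python
# Collect the streamed chunks into a single string
chunks = []
for r in llm.stream_chat([message]):
    chunks.append(r.delta)

full_text = "".join(chunks)
print(full_text)
```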
Complete Tasks
To use the `complete` method for simpler tasks like telling a joke:
```python
resp = llm.complete("Tell me a joke")
print(resp)
```
Example output:

```
Why don't scientists trust atoms?
Because they make up everything!
```
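`complete` returns a completion response object rather than a plain string; in LlamaIndex the generated text is exposed on the response's `text` attribute, as in this short sketch:

```python
resp = llm.complete("Tell me a joke")

# Access only the generated text, without any wrapper formatting
print(resp.text)
```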
Streamed Completion
For generating responses like stories, use the `stream_complete` method:
```python
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")
```
Example output (partial):

```
As the sun set over the horizon, a young girl named Maria sat on the beach, watching the waves roll in...
```
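LlamaIndex LLMs also define async counterparts such as `astream_complete`; whether this integration implements them has not been verified here, so treat the following as a sketch under that assumption:

```python
import asyncio


async def main() -> None:
    # Assumes the integration supports the async LLM interface
    gen = await llm.astream_complete("Tell me a story in 250 words")
    async for r in gen:
        print(r.delta, end="")


asyncio.run(main())
```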
Notes
- Ensure the API key is set correctly before making any requests.
- The `stream_chat` and `stream_complete` methods allow for real-time response streaming, making them ideal for dynamic and lengthy outputs such as stories.
LLM Implementation example
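For a fuller implementation, EverlyAI can also be registered as the default LLM via the `Settings` object from `llama_index.core`, so that query engines and other components route their LLM calls through it. A minimal sketch:

```python
from llama_index.core import Settings
from llama_index.llms.everlyai import EverlyAI

# Register EverlyAI as the default LLM for LlamaIndex components
Settings.llm = EverlyAI(api_key="your-api-key")

# Any component that relies on Settings.llm (query engines, chat engines, etc.)
# will now use EverlyAI for its completions.
```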
Download files
File details
Details for the file llama_index_llms_everlyai-0.3.0.tar.gz.
File metadata
- Download URL: llama_index_llms_everlyai-0.3.0.tar.gz
- Upload date:
- Size: 3.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 40aa5d16c68e56533dacf444e3d73deab466142852f9e39734debb529abbc919 |
| MD5 | dbc09262b2e0668db8d1b42b058fc726 |
| BLAKE2b-256 | f9a4c72e673c9a506d749a80723305c0def7c316ed49128dbce3c532da83eb3a |
File details
Details for the file llama_index_llms_everlyai-0.3.0-py3-none-any.whl.
File metadata
- Download URL: llama_index_llms_everlyai-0.3.0-py3-none-any.whl
- Upload date:
- Size: 4.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 9c14fbf7639262fe452fbbcbfdcaaa901387d191ca40b441cb280f915126540b |
| MD5 | aa68e00aa67ff477d4e009ffc2ad54a4 |
| BLAKE2b-256 | c4905a38d093a5e7afaf350b16ca9bc57ca5ea2a653a279eb9cc2b120755a1d8 |