LlamaIndex Llms Integration: Anyscale
Installation
%pip install llama-index-llms-anyscale
!pip install llama-index
Basic Usage
from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage
# Call chat with a list of ChatMessage objects
# You must either set the ANYSCALE_API_KEY environment variable or pass api_key to the constructor
# Example of setting API key through environment variable
# import os
# os.environ['ANYSCALE_API_KEY'] = '<your-api-key>'
# Initialize the Anyscale LLM with your API key
llm = Anyscale(api_key="<your-api-key>")
# Chat Example
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
#
# I hope that brought a smile to your face! Is there anything else I can assist you with?
Streaming Example
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
print(r.delta, end="")
# Output Example:
# Once upon a time, there was a young girl named Maria who lived in a small village surrounded by lush green forests.
# Maria was a kind and gentle soul, loved by everyone in the village. She spent most of her days exploring the forests,
# discovering new species of plants and animals, and helping the villagers with their daily chores...
# (Story continues until it reaches the word limit.)
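The streaming loop above prints each delta as it arrives; the same deltas can also be accumulated into the full response text. A minimal sketch of that pattern, using a stand-in generator in place of llm.stream_chat (the real stream yields response objects whose .delta attribute holds the newest text fragment):

```python
from types import SimpleNamespace


def fake_stream():
    # Stand-in for llm.stream_chat([message]); each yielded object
    # carries the newest text fragment in .delta, as the real stream does.
    for piece in ["Once upon a time, ", "there was a young girl ", "named Maria."]:
        yield SimpleNamespace(delta=piece)


chunks = []
for r in fake_stream():
    print(r.delta, end="")   # incremental display, as in the example above
    chunks.append(r.delta)   # keep each fragment for later use

full_text = "".join(chunks)  # the complete response once the stream ends
print()
print(full_text)
```

The same accumulation works unchanged with llm.stream_chat or llm.stream_complete, since both yield objects exposing .delta.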
Completion Example
resp = llm.complete("Tell me a joke")
print(resp)
# Expected Output (complete() returns the text without a role prefix):
# Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
Streaming Completion Example
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
print(r.delta, end="")
# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village...
# (Stream continues as the story is generated.)
Model Configuration
llm = Anyscale(model="codellama/CodeLlama-34b-Instruct-hf")
resp = llm.complete("Show me the c++ code to send requests to HTTP Server")
print(resp)
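Beyond direct calls, the configured LLM can be registered as the application-wide default so that indexes and query engines pick it up automatically. A configuration sketch, assuming llama-index-core is installed (the Settings object follows the llama-index core docs; the model string is illustrative):

```python
from llama_index.core import Settings
from llama_index.llms.anyscale import Anyscale

# Make Anyscale the default LLM for downstream LlamaIndex components
Settings.llm = Anyscale(
    model="codellama/CodeLlama-34b-Instruct-hf",
    api_key="<your-api-key>",
)
```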
LLM Implementation example
File details
Details for the file llama_index_llms_anyscale-0.3.0.tar.gz

File metadata
- Download URL: llama_index_llms_anyscale-0.3.0.tar.gz
- Size: 4.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes
Algorithm | Hash digest
---|---
SHA256 | 187fddd7ba54aa929d19976b77f64fc370b4cfe6654316474e5644899384b290
MD5 | 77a604465e0cd0745d8b12aea3fea727
BLAKE2b-256 | 10b00dd5f3d598eb82527dcdfb80f1f98591b2a637b8c40732d43bb96fe0e638
File details
Details for the file llama_index_llms_anyscale-0.3.0-py3-none-any.whl

File metadata
- Download URL: llama_index_llms_anyscale-0.3.0-py3-none-any.whl
- Size: 5.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes
Algorithm | Hash digest
---|---
SHA256 | 90655064f3ed05efa7f053ee9ddeb2dff9d39d20ef2a250e46045acb60e7e020
MD5 | c0b05e5d39a6a34b57a92f64fad9b4bd
BLAKE2b-256 | 36e1c72804bf5f7f12abc8b0eb553c2a9019a63107d66d589e026b5e5694e687