
LlamaIndex Llms Integration: Anyscale

Installation

pip install llama-index-llms-anyscale
pip install llama-index

Basic Usage

from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage

# Call chat with a list of ChatMessage objects.
# Either set the ANYSCALE_API_KEY environment variable or pass api_key to the constructor.

# Example of setting API key through environment variable
# import os
# os.environ['ANYSCALE_API_KEY'] = '<your-api-key>'

# Initialize the Anyscale LLM with your API key
llm = Anyscale(api_key="<your-api-key>")

# Chat Example
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
#
# I hope that brought a smile to your face! Is there anything else I can assist you with?

Streaming Example

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

# Output Example:
# Once upon a time, there was a young girl named Maria who lived in a small village surrounded by lush green forests.
# Maria was a kind and gentle soul, loved by everyone in the village. She spent most of her days exploring the forests,
# discovering new species of plants and animals, and helping the villagers with their daily chores...
# (Story continues until it reaches the word limit.)
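Each streamed chunk's `delta` holds only the newly generated text, so concatenating the deltas reproduces the full response. A minimal sketch of that accumulation, using a hypothetical stand-in generator in place of `llm.stream_chat` so it runs without an API key:

```python
class Chunk:
    """Stand-in for a streaming response chunk, exposing a .delta attribute."""

    def __init__(self, delta):
        self.delta = delta


def fake_stream():
    # Stand-in for llm.stream_chat([message]); each chunk carries only new text
    for piece in ["Once ", "upon ", "a ", "time."]:
        yield Chunk(piece)


# Accumulate deltas into the complete response text
full_text = "".join(chunk.delta for chunk in fake_stream())
print(full_text)  # Once upon a time.
```

The same pattern applies to `stream_complete`: printing each `r.delta` with `end=""` (as above) displays the text incrementally, while joining the deltas yields the final string.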

Completion Example

resp = llm.complete("Tell me a joke")
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!

Streaming Completion Example

resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village...
# (Stream continues as the story is generated.)

Model Configuration

llm = Anyscale(model="codellama/CodeLlama-34b-Instruct-hf")
resp = llm.complete("Show me the c++ code to send requests to HTTP Server")
print(resp)
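Beyond `model`, the constructor accepts the usual generation settings. The snippet below is a configuration sketch, assuming the common llama-index LLM parameters `temperature` and `max_tokens` (check your installed version's signature before relying on them):

```python
from llama_index.llms.anyscale import Anyscale

# Configuration sketch; parameter availability may vary by package version
llm = Anyscale(
    model="codellama/CodeLlama-34b-Instruct-hf",
    api_key="<your-api-key>",
    temperature=0.1,  # lower values favor more deterministic output
    max_tokens=512,   # cap the length of each completion
)
```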

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/anyscale/
