
LlamaIndex Llms Integration: Anyscale

Installation

%pip install llama-index-llms-anyscale
%pip install llama-index

Basic Usage

from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage

# Call chat with a list of ChatMessage objects.
# Either set the ANYSCALE_API_KEY environment variable or pass api_key to the constructor.

# Example of setting API key through environment variable
# import os
# os.environ['ANYSCALE_API_KEY'] = '<your-api-key>'

# Initialize the Anyscale LLM with your API key
llm = Anyscale(api_key="<your-api-key>")

# Chat Example
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)

# Expected Output:
# assistant: Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!
#
# I hope that brought a smile to your face! Is there anything else I can assist you with?
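Printing the response shows its string form; the underlying message fields (role and content) can also be read directly. A minimal sketch with a stand-in object, since the exact `ChatResponse` attribute layout here is an assumption based on the `llama_index.core` interface:

```python
from types import SimpleNamespace

# Stand-in mimicking the shape of a ChatResponse (the real class lives in
# llama_index.core.llms); resp.message.content holds the assistant's text.
resp = SimpleNamespace(
    message=SimpleNamespace(role="assistant", content="Because it was two-tired!")
)
print(resp.message.role)     # → assistant
print(resp.message.content)  # → Because it was two-tired!
```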

Streaming Example

message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")

# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village surrounded by lush green forests.
# Maria was a kind and gentle soul, loved by everyone in the village. She spent most of her days exploring the forests,
# discovering new species of plants and animals, and helping the villagers with their daily chores...
# (Story continues until it reaches the word limit.)
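The deltas printed in the loop concatenate to the full response text. A minimal illustration with hypothetical stand-in deltas (no API call):

```python
# Hypothetical stand-in for the stream of response chunks; in the real
# loop, each r.delta is the newly generated text fragment.
deltas = ["Once upon a time, ", "there was ", "a young girl ", "named Maria."]

# Accumulating the deltas reproduces the complete message.
full_text = "".join(deltas)
print(full_text)  # → Once upon a time, there was a young girl named Maria.
```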

Completion Example

resp = llm.complete("Tell me a joke")
print(resp)

# Expected Output:
# Sure, here's a joke for you:
#
# Why couldn't the bicycle stand up by itself?
#
# Because it was two-tired!

Streaming Completion Example

resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")

# Example Output:
# Once upon a time, there was a young girl named Maria who lived in a small village...
# (Stream continues as the story is generated.)

Model Configuration

llm = Anyscale(model="codellama/CodeLlama-34b-Instruct-hf")
resp = llm.complete("Show me the C++ code to send requests to an HTTP server")
print(resp)
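Beyond `model`, the constructor generally accepts the common LlamaIndex LLM settings; the parameter names below (`temperature`, `max_tokens`) are assumptions based on that shared interface, so check the class signature before relying on them:

```python
# Hypothetical configuration sketch; parameter names assume the Anyscale
# constructor follows the common LlamaIndex LLM interface.
config = dict(
    model="codellama/CodeLlama-34b-Instruct-hf",
    temperature=0.1,  # lower values give more deterministic output
    max_tokens=512,   # cap on the number of generated tokens
)
# llm = Anyscale(api_key="<your-api-key>", **config)
print(config["model"])  # → codellama/CodeLlama-34b-Instruct-hf
```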

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/anyscale/
