LlamaIndex LLMs Integration: Bedrock

Installation

%pip install llama-index-llms-bedrock
%pip install llama-index

Basic Usage

from llama_index.llms.bedrock import Bedrock

# Set your AWS profile name
profile_name = "Your aws profile name"

# Simple completion call
resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)

# Expected output:
# Paul Graham is a computer scientist and entrepreneur, best known for co-founding
# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
# and speaker on technology and business topics...

Call chat with a list of messages

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)

# Expected output:
# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
# named Captain Jack Sparrow who sailed the seas in search of his next adventure...

Streaming

Using stream_complete endpoint

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
# for co-founding the internet firm Y Combinator...

Streaming chat

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Once upon a time, there was a pirate with a colorful personality who sailed the
# high seas in search of adventure...

Configure Model

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
# Viaweb, the first commercial web browser...
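
The constructor also accepts generation settings beyond the model name. The sketch below assumes the temperature, max_tokens, and context_size parameters exposed by the current Bedrock class; exact defaults and availability may vary by model and package version.

from llama_index.llms.bedrock import Bedrock

# Sketch: pass generation parameters at construction time.
# temperature, max_tokens, and context_size are assumed to be
# supported by this version of the Bedrock class.
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,
    temperature=0.5,    # sampling temperature
    max_tokens=256,     # maximum number of tokens to generate
    context_size=8192,  # context window of the chosen model
)
resp = llm.complete("Paul Graham is ")
print(resp)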

Connect to Bedrock with Access Keys

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is an American computer scientist, entrepreneur, investor, and author,
# best known for co-founding Viaweb, the first commercial web browser...

LLM Implementation Example

For a complete, runnable walkthrough, see the example notebook:

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock/
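
Beyond direct completion and chat calls, the LLM can be plugged into the wider LlamaIndex framework. A minimal sketch, assuming the llama-index core package is installed and using its Settings singleton:

from llama_index.core import Settings
from llama_index.llms.bedrock import Bedrock

# Sketch: register Bedrock as the default LLM so that downstream
# components (query engines, chat engines, etc.) use it automatically.
Settings.llm = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
)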
