LlamaIndex LLMs Integration: Bedrock

Installation

%pip install llama-index-llms-bedrock
!pip install llama-index

Basic Usage

from llama_index.llms.bedrock import Bedrock

# Set your AWS profile name
profile_name = "your AWS profile name"

# Simple completion call
resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)

# Expected output:
# Paul Graham is a computer scientist and entrepreneur, best known for co-founding
# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
# and speaker on technology and business topics...
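
If you plan to use this LLM across a larger LlamaIndex application, one common pattern is to register it as the global default via Settings from llama-index-core. The snippet below is an illustrative sketch, not part of the Bedrock README itself; it reuses the profile_name defined above and assumes a standard llama-index-core installation.

from llama_index.core import Settings
from llama_index.llms.bedrock import Bedrock

# Make Bedrock the default LLM for query engines, chat engines, etc.
Settings.llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,  # AWS profile set in Basic Usage above
)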

Call chat with a list of messages

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)

# Expected output:
# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
# named Captain Jack Sparrow who sailed the seas in search of his next adventure...
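
The value returned by chat() is a ChatResponse wrapping the assistant message. If you want only the raw text rather than the printed "assistant: ..." form, you can read it from the message; this assumes the standard response interface in llama-index-core.

# Access the assistant message text directly (resp from the chat() call above)
print(resp.message.content)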

Streaming

Using stream_complete endpoint

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
# for co-founding the internet firm Y Combinator...
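
Each streamed chunk also carries the text accumulated so far, so you can keep the last chunk if you want the complete response once streaming finishes. This is a small sketch assuming the standard CompletionResponse fields (delta and text):

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
final = None
for r in llm.stream_complete("Paul Graham is "):
    print(r.delta, end="")
    final = r  # keep the latest chunk

# The last chunk's .text holds the full accumulated completion
if final is not None:
    full_text = final.text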

Streaming chat

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Once upon a time, there was a pirate with a colorful personality who sailed the
# high seas in search of adventure...

Configure Model

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
# Viaweb, the first commercial web browser...
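
The call above relies on the model's default settings. The constructor also accepts generation parameters; the example below is a hedged sketch assuming the temperature, max_tokens, and context_size keyword arguments exposed by the Bedrock class (check the class signature in your installed version before relying on them).

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,
    temperature=0.1,    # lower temperature for more deterministic output
    max_tokens=512,     # cap the number of generated tokens
    context_size=8192,  # context window to assume for this model
)
resp = llm.complete("Paul Graham is ")
print(resp)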

Connect to Bedrock with Access Keys

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is an American computer scientist, entrepreneur, investor, and author,
# best known for co-founding Viaweb, the first commercial web browser...
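
Passing keys explicitly is optional. If you omit them, the underlying boto3 session falls back to its default credential chain (environment variables, the shared ~/.aws/credentials file, or an attached IAM role). A minimal sketch under that assumption:

from llama_index.llms.bedrock import Bedrock

# Credentials resolved by boto3's default chain; only the region is specified here
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    region_name="us-east-1",  # use a region where you have Bedrock model access
)
print(llm.complete("Paul Graham is "))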

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock/

