# LlamaIndex Llms Integration: Bedrock
## Installation

```shell
%pip install llama-index-llms-bedrock
!pip install llama-index
```
## Basic Usage

```python
from llama_index.llms.bedrock import Bedrock

# Set your AWS profile name
profile_name = "Your aws profile name"

# Simple completion call
resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)
```

Expected output:

```
Paul Graham is a computer scientist and entrepreneur, best known for co-founding
the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
and speaker on technology and business topics...
```
## Call `chat` with a List of Messages

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)
```

Expected output:

```
assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
named Captain Jack Sparrow who sailed the seas in search of his next adventure...
```
## Streaming

### Using the `stream_complete` endpoint

```python
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```

Expected output (streamed):

```
Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
for co-founding the internet firm Y Combinator...
```
### Streaming chat

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```

Expected output (streamed):

```
Once upon a time, there was a pirate with a colorful personality who sailed the
high seas in search of adventure...
```
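Asynchronous use is also possible. llama-index LLMs generally expose async counterparts of the sync methods (`acomplete`, `achat`, `astream_chat`); a minimal sketch, assuming these are available on the `Bedrock` class and that `astream_chat` yields incremental deltas:

```python
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

profile_name = "Your aws profile name"


async def main():
    llm = Bedrock(
        model="amazon.titan-text-express-v1", profile_name=profile_name
    )
    messages = [ChatMessage(role="user", content="Tell me a story")]
    # astream_chat is awaited to obtain an async generator of response deltas
    resp = await llm.astream_chat(messages)
    async for r in resp:
        print(r.delta, end="")


asyncio.run(main())
```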
## Configure Model

```python
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)
```

Expected output:

```
Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
Viaweb, the first commercial web browser...
```
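Beyond the model id, generation behavior can usually be tuned at construction time. `temperature` and `max_tokens` are standard llama-index LLM parameters, though exact support varies by Bedrock model; a sketch under that assumption:

```python
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name="Your aws profile name",
    temperature=0.1,  # lower values give more deterministic output
    max_tokens=256,   # cap on the number of generated tokens
)
resp = llm.complete("Paul Graham is ")
print(resp)
```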
## Connect to Bedrock with Access Keys

```python
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)
resp = llm.complete("Paul Graham is ")
print(resp)
```

Expected output:

```
Paul Graham is an American computer scientist, entrepreneur, investor, and author,
best known for co-founding Viaweb, the first commercial web browser...
```
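Explicit keys are not required: the integration uses boto3 under the hood, which also resolves credentials from the standard environment variables and the default credential chain. A sketch, assuming credentials are supplied via the environment (values below are placeholders):

```python
import os

from llama_index.llms.bedrock import Bedrock

# Standard boto3 environment variables; replace the placeholder values
os.environ["AWS_ACCESS_KEY_ID"] = "AWS Access Key ID to use"
os.environ["AWS_SECRET_ACCESS_KEY"] = "AWS Secret Access Key to use"

# No explicit credentials passed; boto3 picks them up from the environment
llm = Bedrock(model="amazon.titan-text-express-v1", region_name="us-east-1")
```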
## LLM Implementation example