A Python LLM framework for interacting with AWS Bedrock services, built on top of the boto3 library. It is designed for fast prototyping, building POCs, and production-ready applications.

Project description

Bedrock LLM

A Python library for building LLM applications on Amazon Bedrock, using the boto3 library under the hood. It aims to enable fast prototyping with the variety of foundation models available on Amazon Bedrock, and to make it easy to integrate those models with other services.

The library is crafted around best practices and is production ready for the Anthropic model family, Llama, Amazon Titan Text, Mistral AI, and AI21.

Conceptual Architecture

Features

  • Support for Retrieval-Augmented Generation (RAG)
  • Support for Agent-based interactions
  • Support for Multi-Agent systems (in progress)
  • Support for creating workflows, nodes, and event-based systems (coming soon)
  • Support for image generation models, speech-to-text (STT), and text-to-speech (TTS) (coming soon)
  • Performance monitoring for both asynchronous and synchronous functions
  • Logging functionality for tracking function calls, inputs, and outputs

Installation

You can install the Bedrock LLM library using pip:

pip install bedrock-llm

This library requires Python 3.9 or later.
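Because the library is built on boto3, it authenticates through the standard AWS credential chain (environment variables, the shared credentials file, or an attached IAM role). Before calling Bedrock, you can verify that boto3 picks up your credentials and region with a quick check like this:

import boto3

# bedrock-llm builds on boto3, so it uses the standard AWS credential chain
# (environment variables, ~/.aws/credentials, or an attached IAM role).
session = boto3.Session()
print("Credentials found:", session.get_credentials() is not None)
print("Default region:", session.region_name)

Also make sure your AWS account has been granted access to the Bedrock models you plan to use in the chosen region.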

Usage

Here's a quick example of how to use the Bedrock LLM library:

Simple text generation

from bedrock_llm import LLMClient, ModelName, ModelConfig
from termcolor import cprint  # used below for colored console output

# Create a LLM client
client = LLMClient(
    region_name="us-east-1",
    model_name=ModelName.MISTRAL_7B
)

# Create a configuration for inference parameters
config = ModelConfig(
    temperature=0.1,
    top_p=0.9,
    max_tokens=512
)

# Create a prompt
prompt = "Who are you?"

# Invoke the model and get results
response, stop_reason = client.generate(config, prompt)

# Print out the results
cprint(response.content, "green")
cprint(stop_reason, "red")
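The same pattern works with other entries of the ModelName enum. As a minimal sketch (assuming the constructor semantics are identical across models), here is the same call against the Claude model used in the tool-calling example below:

# Reuse the config from above with a different model from the ModelName enum
claude_client = LLMClient(
    region_name="us-east-1",
    model_name=ModelName.CLAUDE_3_5_HAIKU
)

response, stop_reason = claude_client.generate(config, "Summarize Amazon Bedrock in one sentence.")
cprint(response.content, "green")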

Simple tool calling

from bedrock_llm import Agent, ModelName
from bedrock_llm.schema.tools import ToolMetadata, InputSchema, PropertyAttr

agent = Agent(
    region_name="us-east-1",
    model_name=ModelName.CLAUDE_3_5_HAIKU
)

# Define the tool description for the model
get_weather_tool = ToolMetadata(
    name="get_weather",
    description="Get the weather in specific location",
    input_schema=InputSchema(
        type="object",
        properties={
            "location": PropertyAttr(
                type="string",
                description="Location to search for, example: New York, WashingtonDC, ..."
            )
        },
        required=["location"]
    )
)

# Define the tool
@Agent.tool(get_weather_tool)
async def get_weather(location: str):
    return f"{location} is 20*C"


async def main():
    prompt = input("User: ")

    async for token, stop_reason, response, tool_result in agent.generate_and_action_async(
        prompt=prompt,
        tools=["get_weather"]
    ):
        if token:
            print(token, end="", flush=True)
        if stop_reason:
            print(f"\n{stop_reason}")


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
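For context, a ToolMetadata definition like the one above corresponds to the JSON tool schema that Claude models on Bedrock expect. The following dictionary is an illustrative sketch of that wire format, not the library's actual serialization:

# Roughly what the ToolMetadata above describes, in plain JSON form
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the weather in a specific location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "Location to search for, e.g. New York, Washington DC"
            }
        },
        "required": ["location"]
    }
}

Since pydantic is a dependency, the typed schema classes (ToolMetadata, InputSchema, PropertyAttr) can validate a tool definition before it is sent to the model.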

Monitoring and Logging

Use the monitor decorators to track the performance of both asynchronous and synchronous functions:

from bedrock_llm.monitor import monitor_async, monitor_sync

@monitor_async
async def my_async_function():
    # Your async function code here
    ...

@monitor_sync
def my_sync_function():
    # Your sync function code here
    ...

Use the log decorators to record function calls, inputs, and outputs:

from bedrock_llm.monitor import log_async, log_sync

@log_async
async def my_async_function():
    # Your async function code here
    ...

@log_sync
def my_sync_function():
    # Your sync function code here
    ...

These decorators are optimized for minimal performance impact on your application.
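For intuition, a performance monitor of this kind typically wraps the target function and records elapsed time. Here is a hypothetical stand-in illustrating the idea, not the library's actual implementation:

import functools
import time

def simple_monitor_sync(func):
    # Hypothetical example: time the wrapped call and report the duration.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@simple_monitor_sync
def add(a, b):
    return a + b

add(1, 2)  # prints something like: add took 0.0000s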

More examples

For more detailed usage instructions and API documentation, please refer to our documentation.

You can also see examples of how to use the library and build LLM flows.

More examples are on the way; we are working on it :)

Requirements

  • Python 3.9+
  • pydantic>=2.0.0
  • boto3>=1.18.0
  • botocore>=1.21.0
  • jinja2>=3.1.4

Contributing

We welcome contributions! Please see our contributing guidelines for more details.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

bedrock_llm-0.1.4.tar.gz (56.9 kB)

Built Distribution

bedrock_llm-0.1.4-py3-none-any.whl (79.9 kB)

File details

Details for the file bedrock_llm-0.1.4.tar.gz.

File metadata

  • Download URL: bedrock_llm-0.1.4.tar.gz
  • Upload date:
  • Size: 56.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.2

File hashes

Hashes for bedrock_llm-0.1.4.tar.gz

  • SHA256: e97fa0980bb66a324d2f0e071584231b6d90206949f6ce83669c1a4cf6ac273e
  • MD5: cae1a1de663c867d21b9b85aa32d3f0b
  • BLAKE2b-256: fdd7ed4a3e6df185d2b523a085a4b28143d74c665427d4c048f953303b760ace


File details

Details for the file bedrock_llm-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: bedrock_llm-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 79.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.2

File hashes

Hashes for bedrock_llm-0.1.4-py3-none-any.whl

  • SHA256: f29f0a78a352f30ee13f420a74db056796ff2d0da73cf8e3c2f4374c87c60608
  • MD5: 1f262471645e8c728dcfb97375b55f22
  • BLAKE2b-256: 79fe779e463e7bcda419aedccdfb0d0076742a7720a51cfb221c2b3d0df7eb27

