Anthropic Bedrock Python API library

The Anthropic Bedrock Python library provides convenient access to the Anthropic Bedrock REST API from any Python 3.7+ application. It includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

For the non-Bedrock Anthropic API at api.anthropic.com, see anthropic-python.

Documentation

The REST API documentation can be found on docs.anthropic.com. The full API of this library can be found in api.md.

Installation

pip install anthropic-bedrock

Usage

The full API of this library can be found in api.md.

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    # Authenticate by either providing the keys below or use the default AWS credential providers, such as
    # using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
    aws_access_key="<access key>",
    aws_secret_key="<secret key>",
    # Temporary credentials can be used with aws_session_token.
    # Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
    aws_session_token="<session_token>",
    # aws_region changes the AWS region to which the request is made. By default, we read AWS_REGION,
    # and if that's not present, we default to us-east-1. Note that we do not read ~/.aws/config for the region.
    aws_region="us-east-2",
)

completion = client.completions.create(
    model="anthropic.claude-v2:1",
    max_tokens_to_sample=256,
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} how does a court case get to the Supreme Court? {anthropic_bedrock.AI_PROMPT}",
)
print(completion.completion)

This library uses botocore internally for authentication; you can read more about the default providers here.
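For example, with credentials already configured in your environment, the client can be constructed with no explicit keys:

from anthropic_bedrock import AnthropicBedrock

# No keys passed: botocore's default credential chain is used, e.g.
# ~/.aws/credentials or the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables.
client = AnthropicBedrock()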

Async usage

Simply import AsyncAnthropicBedrock instead of AnthropicBedrock and use await with each API call:

import asyncio

import anthropic_bedrock
from anthropic_bedrock import AsyncAnthropicBedrock

client = AsyncAnthropicBedrock()


async def main():
    completion = await client.completions.create(
        model="anthropic.claude-v2:1",
        max_tokens_to_sample=256,
        prompt=f"{anthropic_bedrock.HUMAN_PROMPT} how does a court case get to the Supreme Court? {anthropic_bedrock.AI_PROMPT}",
    )
    print(completion.completion)


asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.

Streaming Responses

We provide support for streaming responses using Server-Sent Events (SSE).

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

stream = client.completions.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    stream=True,
)
for completion in stream:
    print(completion.completion, end="", flush=True)

The async client uses the exact same interface.

from anthropic_bedrock import AsyncAnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AsyncAnthropicBedrock()

stream = await client.completions.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    stream=True,
)
async for completion in stream:
    print(completion.completion, end="", flush=True)

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models, which provide helper methods for things like:

  • Serializing back into JSON, model.model_dump_json(indent=2, exclude_unset=True)
  • Converting to a dictionary, model.model_dump(exclude_unset=True)
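For example, a minimal sketch reusing the completion response from the usage example above:

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

completion = client.completions.create(
    model="anthropic.claude-v2:1",
    max_tokens_to_sample=256,
    prompt=f"{HUMAN_PROMPT} Hello, Claude{AI_PROMPT}",
)

# Responses are Pydantic models, so these helpers work directly:
print(completion.model_dump_json(indent=2, exclude_unset=True))
print(completion.model_dump(exclude_unset=True))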

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Token counting

You can estimate billing for a given request with the client.count_tokens() method, e.g.:

from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()
client.count_tokens('Hello world!')  # 3
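To approximate the total tokens billed for a request, you can count both the prompt and the returned completion. A sketch (exact billing is determined server-side):

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

prompt = f"{HUMAN_PROMPT} How does a bill become law?{AI_PROMPT}"
completion = client.completions.create(
    model="anthropic.claude-v2:1",
    max_tokens_to_sample=256,
    prompt=prompt,
)

# Estimate: tokens in the prompt plus tokens in the sampled completion.
total = client.count_tokens(prompt) + client.count_tokens(completion.completion)
print(f"~{total} tokens for this request")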

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of anthropic_bedrock.APIConnectionError is raised.

When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of anthropic_bedrock.APIStatusError is raised, containing status_code and response properties.

All errors inherit from anthropic_bedrock.APIError.

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

try:
    client.completions.create(
        prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Your prompt here {anthropic_bedrock.AI_PROMPT}",
        max_tokens_to_sample=256,
        model="anthropic.claude-v2:1",
    )
except anthropic_bedrock.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except anthropic_bedrock.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except anthropic_bedrock.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)

Error codes are as follows:

Status Code  Error Type
400          BadRequestError
401          AuthenticationError
403          PermissionDeniedError
404          NotFoundError
422          UnprocessableEntityError
429          RateLimitError
>=500        InternalServerError
N/A          APIConnectionError

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

# Configure the default for all requests:
client = AnthropicBedrock(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).completions.create(
    prompt=f"{HUMAN_PROMPT} Can you help me effectively ask for a raise at work?{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)

Timeouts

By default, requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

import httpx

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

# Configure the default for all requests:
client = AnthropicBedrock(
    # default is 10 minutes
    timeout=20.0,
)

# More granular control:
client = AnthropicBedrock(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).completions.create(
    prompt=f"{HUMAN_PROMPT} Where can I get a good coffee in my neighbourhood?{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)

On timeout, an APITimeoutError is raised.

Note that requests that time out are retried twice by default.
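The retry and timeout settings compose per-request; a minimal sketch (assuming with_options accepts both options together):

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

# Tight 5-second timeout with retries disabled for a latency-sensitive call:
client.with_options(max_retries=0, timeout=5.0).completions.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)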

Advanced

Logging

We use the standard library logging module.

You can enable logging by setting the environment variable ANTHROPIC_BEDROCK_LOG to debug.

$ export ANTHROPIC_BEDROCK_LOG=debug
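Since logging goes through the standard library, you can also configure it in code. A sketch (the logger name "anthropic_bedrock" is an assumption based on the package name):

import logging

# Route the library's debug logs to stderr. The logger name
# "anthropic_bedrock" is assumed from the package name.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("anthropic_bedrock").setLevel(logging.DEBUG)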

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing .with_raw_response. to any HTTP method call, e.g.,

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

response = client.completions.with_raw_response.create(
    prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `completions.create()` would have returned
print(completion.completion)

These methods return a LegacyAPIResponse object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client, this will be mostly the same, with the exception that content and text will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided & the migration in general should be smooth.

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines() or .parse(). In the async client, these are async methods.

As such, .with_streaming_response methods return a different APIResponse object, and the async client returns an AsyncAPIResponse object.

from anthropic_bedrock import AnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AnthropicBedrock()

with client.completions.with_streaming_response.create(
    max_tokens_to_sample=300,
    model="anthropic.claude-v2:1",
    prompt=f"{HUMAN_PROMPT} Where can I get a good coffee in my neighbourhood?{AI_PROMPT}",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)

The context manager is required so that the response will reliably be closed.
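In the async client, the same pattern applies with async with and async iteration. A sketch mirroring the sync example, assuming the async methods described above:

from anthropic_bedrock import AsyncAnthropicBedrock, HUMAN_PROMPT, AI_PROMPT

client = AsyncAnthropicBedrock()


async def main():
    async with client.completions.with_streaming_response.create(
        max_tokens_to_sample=300,
        model="anthropic.claude-v2:1",
        prompt=f"{HUMAN_PROMPT} Your prompt here{AI_PROMPT}",
    ) as response:
        print(response.headers.get("X-My-Header"))

        async for line in response.iter_lines():
            print(line)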

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including:

  • Support for proxies
  • Custom transports
  • Additional advanced functionality

import httpx
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    # Or use the `ANTHROPIC_BEDROCK_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
    aws_secret_key="<secret key>",
    aws_access_key="<access key>",
    aws_region="us-east-2",
)

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.
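For example, a minimal sketch of both options described above:

from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()
# ... use the client ...
client.close()  # explicitly release the underlying HTTP connections

# Or let a context manager close it when the block exits:
with AnthropicBedrock() as client:
    ...  # use the client; connections are closed on exit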

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals).
  3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Requirements

Python 3.7 or higher.
