
Integrate Amberflo into any Python 3 application.


amberflo-metering-python


Amberflo is the simplest way to integrate metering into your application.

This is the official Python 3 client that wraps the Amberflo REST API.

:heavy_check_mark: Features

  • Add and update customers
  • Assign and update product plans to customers
  • List invoices of a customer
  • Get a new customer portal session for a customer
  • Add and list prepaid orders for customers
  • Send meter events
    • In asynchronous batches for high throughput (with optional flush on demand)
    • Or synchronously
    • Using the Amberflo API or the Amberflo supplied AWS S3 bucket
  • Query usage
  • Fine-grained logging control

:rocket: Quick Start

  1. Sign up for free and get an API key.

  2. Install the SDK

pip install amberflo-metering-python
  3. Create a customer
import os
from metering.customer import CustomerApiClient, create_customer_payload

client = CustomerApiClient(os.environ.get("API_KEY"))

message = create_customer_payload(
    customer_id="sample-customer-123",
    customer_email="customer-123@sample.com",
    customer_name="Sample Customer",
    traits={
        "region": "us-east-1",
    },
)
customer = client.add_or_update(message)
  4. Ingest meter events
import os
from time import time
from metering.ingest import create_ingest_client

client = create_ingest_client(api_key=os.environ["API_KEY"])

dimensions = {"region": "us-east-1"}
customer_id = "sample-customer-123"

client.meter(
    meter_api_name="sample-meter",
    meter_value=5,
    meter_time_in_millis=int(time() * 1000),
    customer_id=customer_id,
    dimensions=dimensions,
)
  5. Query usage
import os
from time import time
from metering.usage import (AggregationType, Take, TimeGroupingInterval,
                            TimeRange, UsageApiClient, create_usage_query)

client = UsageApiClient(os.environ.get("API_KEY"))

since_two_days_ago = TimeRange(int(time()) - 60 * 60 * 24 * 2)

query = create_usage_query(
    meter_api_name="my_meter",
    aggregation=AggregationType.SUM,
    time_grouping_interval=TimeGroupingInterval.DAY,
    time_range=since_two_days_ago,
    group_by=["customerId"],
    usage_filter={"customerId": ["some-customer-321", "sample-customer-123"]},
    take=Take(limit=10, is_ascending=False),
)
report = client.get(query)

:zap: High throughput ingestion

Amberflo.io libraries are built to support high-throughput environments. That means you can safely send hundreds of meter records per second. For example, you can choose to deploy it on a web server that is serving hundreds of requests per second.

However, not every call results in an HTTP request; instead, meter records are queued in memory. Messages are batched and flushed in the background, allowing for much faster operation. The batch size and flush rate can be customized.

Flush on demand: At the end of your program, for example, you'll want to flush to make sure nothing is left in the queue. Calling this method blocks the calling thread until the queue is empty, so use it in your cleanup scripts and avoid calling it as part of the request lifecycle.

Error handling: The SDK allows you to set up an on_error callback function for handling errors when trying to send a batch.

Here is a complete example, showing the default values of all options:

def on_error_callback(error, batch):
    ...

client = create_ingest_client(
    api_key=API_KEY,
    max_queue_size=100000,  # max number of items in the queue before rejecting new items
    threads=2,  # number of worker threads doing the sending
    retries=2,  # max number of retries after failures
    batch_size=100,  # max number of meter records in a batch
    send_interval_in_secs=0.5,  # wait time before sending an incomplete batch
    sleep_interval_in_secs=0.1,  # wait time after failure to send or queue empty
    on_error=on_error_callback,  # handle failures to send a batch
)

...

client.meter(...)

client.flush()  # block and make sure all messages are sent

What happens if there are just too many messages?

If the module detects that it can't flush faster than it's receiving messages, it simply stops accepting new messages. This allows your program to keep running without ever crashing due to a backed-up metering queue.
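The on_error callback in the example above is left as a stub. As a minimal sketch (the logger name myapp.metering is arbitrary, not part of the SDK), it might simply log the failure:

```python
import logging

logger = logging.getLogger("myapp.metering")


def on_error_callback(error, batch):
    # Invoked after retries are exhausted for a batch.
    # Log the failure and how many meter records were affected.
    logger.error("failed to send %d meter records: %s", len(batch), error)
```

Since the callback is invoked from the SDK's background worker threads, keep it fast and non-blocking.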

Ingesting through the S3 bucket

The SDK provides a metering.ingest.IngestS3Client so you can send your meter records to Amberflo via the Amberflo-supplied S3 bucket.

This feature requires installing the library with the s3 extra:

pip install amberflo-metering-python[s3]

Just pass the S3 bucket credentials to the factory function:

client = create_ingest_client(
    bucket_name=os.environ.get("BUCKET_NAME"),
    access_key=os.environ.get("ACCESS_KEY"),
    secret_key=os.environ.get("SECRET_KEY"),
)

:book: Documentation

General documentation on how to use Amberflo is available at Product Walkthrough.

The full REST API documentation is available at API Reference.

:scroll: Samples

Code samples covering different scenarios are available in the ./samples folder.

:construction_worker: Contributing

Feel free to open issues and send a pull request.

Also, check out CONTRIBUTING.md.

:bookmark_tabs: Reference

API Clients

Ingest

from metering.ingest import (
    create_ingest_payload,
    create_ingest_client,
)

Customer

from metering.customer import (
    CustomerApiClient,
    create_customer_payload,
)

Usage

from metering.usage import (
    AggregationType,
    Take,
    TimeGroupingInterval,
    TimeRange,
    UsageApiClient,
    create_usage_query,
    create_all_usage_query,
)

Customer Portal Session

from metering.customer_portal_session import (
    CustomerPortalSessionApiClient,
    create_customer_portal_session_payload,
)

Customer Prepaid Order

from metering.customer_prepaid_order import (
    BillingPeriod,
    BillingPeriodUnit,
    CustomerPrepaidOrderApiClient,
    create_customer_prepaid_order_payload,
)

Customer Product Invoice

from metering.customer_product_invoice import (
    CustomerProductInvoiceApiClient,
    create_all_invoices_query,
    create_latest_invoice_query,
    create_invoice_query,
)

Customer Product Plan

from metering.customer_product_plan import (
    CustomerProductPlanApiClient,
    create_customer_product_plan_payload,
)

Exceptions

from metering.exceptions import ApiError

Logging

amberflo-metering-python uses the standard Python logging framework. By default, logging is enabled and set at the WARNING level.

The following loggers are used:

  • metering.ingest.producer
  • metering.ingest.s3_client
  • metering.ingest.consumer
  • metering.session.ingest_session
  • metering.session.api_session
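For example, using the standard logging module, you can adjust these loggers individually. The DEBUG level chosen here is just an illustration:

```python
import logging

# Route SDK log records through the application's logging configuration.
logging.basicConfig(level=logging.WARNING)

# Turn on verbose output for the ingestion pipeline only;
# the other SDK loggers stay at the default WARNING level.
for name in ("metering.ingest.producer", "metering.ingest.consumer"):
    logging.getLogger(name).setLevel(logging.DEBUG)
```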

