
An ultra-high-performance package for sending requests to Baseten Embedding Inference

Project description

High performance client for Baseten.co

This library provides a high-performance Python client for Baseten.co endpoints, including embeddings, reranking, and classification. It was built for massive numbers of concurrent POST requests to any URL, including endpoints outside of baseten.co. PerformanceClient releases the GIL while performing requests in Rust and supports simultaneous sync and async usage. It was benchmarked at more than 1,200 requests per second per client in our blog. PerformanceClient is built on top of pyo3, reqwest, and tokio, and is MIT licensed.


Installation

pip install baseten_performance_client

Usage

import os
import asyncio
from baseten_performance_client import PerformanceClient, OpenAIEmbeddingsResponse, RerankResponse, ClassificationResponse

api_key = os.environ.get("BASETEN_API_KEY")
base_url_embed = "https://model-yqv4yjjq.api.baseten.co/environments/production/sync"
# Also works with OpenAI or Mixedbread.
# base_url_embed = "https://api.openai.com" or "https://api.mixedbread.com"

# Basic client setup
client = PerformanceClient(base_url=base_url_embed, api_key=api_key)

# Advanced setup with HTTP version selection and connection pooling
from baseten_performance_client import HttpClientWrapper
http_wrapper = HttpClientWrapper(http_version=1)  # HTTP/1.1 (default)
advanced_client = PerformanceClient(
    base_url=base_url_embed,
    api_key=api_key,
    http_version=1,  # HTTP/1.1
    client_wrapper=http_wrapper  # Share connection pool
)

Embeddings

Synchronous Embedding

from baseten_performance_client import RequestProcessingPreference

texts = ["Hello world", "Example text", "Another sample"]
preference = RequestProcessingPreference(
    batch_size=16,
    max_concurrent_requests=32,
    timeout_s=360,
    max_chars_per_request=256000,  # Character limit per request
    hedge_delay=0.5,  # Enable hedging with 0.5s delay
    total_timeout_s=360  # Total operation timeout
)
response = client.embed(
    input=texts,
    model="my_model",
    preference=preference
)

# Accessing embedding data
print(f"Model used: {response.model}")
print(f"Total tokens used: {response.usage.total_tokens}")
print(f"Total time: {response.total_time:.4f}s")
if response.individual_batch_request_times:
    for i, batch_time in enumerate(response.individual_batch_request_times):
        print(f"  Time for batch {i}: {batch_time:.4f}s")

for i, embedding_data in enumerate(response.data):
    print(f"Embedding for text {i} (original input index {embedding_data.index}):")
    # embedding_data.embedding can be List[float] or str (base64)
    if isinstance(embedding_data.embedding, list):
        print(f"  First 3 dimensions: {embedding_data.embedding[:3]}")
        print(f"  Length: {len(embedding_data.embedding)}")

# Using the numpy() method (requires numpy to be installed)
import numpy as np
numpy_array = response.numpy()
print("\nEmbeddings as NumPy array:")
print(f"  Shape: {numpy_array.shape}")
print(f"  Data type: {numpy_array.dtype}")
if numpy_array.shape[0] > 0:
    print(f"  First 3 dimensions of the first embedding: {numpy_array[0][:3]}")

Note: The embed method works with any OpenAI-compatible embeddings service (e.g. the OpenAI API), not just Baseten deployments.

Asynchronous Embedding

async def async_embed():
    from baseten_performance_client import RequestProcessingPreference

    texts = ["Async hello", "Async example"]
    preference = RequestProcessingPreference(
        batch_size=16,
        max_concurrent_requests=32,
        timeout_s=360,
        max_chars_per_request=256000,  # Character limit per request
        hedge_delay=0.5,  # Enable hedging with 0.5s delay
        total_timeout_s=360  # Total operation timeout
    )
    response = await client.async_embed(
        input=texts,
        model="my_model",
        preference=preference
    )
    print("Async embedding response:", response.data)

# To run:
# asyncio.run(async_embed())

Embedding Benchmarks

Comparison against pip install openai for /v1/embeddings. Tested with ./scripts/compare_latency_openai.py using a mini_batch_size of 128 and 4 server-side replicas. Results against OpenAI's API are similar; OpenAI allows a maximum mini_batch_size of 2048.

Number of inputs / embeddings   Number of Tasks   PerformanceClient (s)   AsyncOpenAI (s)   Speedup
128                             1                 0.12                    0.13              1.08x
512                             4                 0.14                    0.21              1.50x
8,192                           64                0.83                    1.95              2.35x
131,072                         1,024             4.63                    39.07             8.44x
2,097,152                       16,384            70.92                   903.68            12.74x
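
The speedup column is simply the ratio of the two wall-clock times; recomputing it from the table above:

```python
# Recompute the speedup column from the benchmark table:
# speedup = AsyncOpenAI time / PerformanceClient time.
rows = [
    (128, 0.12, 0.13),
    (512, 0.14, 0.21),
    (8_192, 0.83, 1.95),
    (131_072, 4.63, 39.07),
    (2_097_152, 70.92, 903.68),
]
for num_inputs, perf_s, openai_s in rows:
    print(f"{num_inputs}: {openai_s / perf_s:.2f}x speedup")
```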

General Batch POST

The batch_post method is generic: it can send POST requests to any URL, not just Baseten endpoints, and the payloads and responses can be arbitrary JSON.

Synchronous Batch POST

from baseten_performance_client import RequestProcessingPreference

payload1 = {"model": "my_model", "input": ["Batch request sample 1"]}
payload2 = {"model": "my_model", "input": ["Batch request sample 2"]}
preference = RequestProcessingPreference(
    max_concurrent_requests=32,
    timeout_s=360,
    hedge_delay=0.5,  # Enable hedging with 0.5s delay
    total_timeout_s=360,  # Total operation timeout
    extra_headers={"x-custom-header": "value"}  # Custom headers
)
response_obj = client.batch_post(
    url_path="/v1/embeddings", # Example path, adjust to your needs
    payloads=[payload1, payload2],
    preference=preference
)
print(f"Total time for batch POST: {response_obj.total_time:.4f}s")
for i, (resp_data, headers, time_taken) in enumerate(zip(response_obj.data, response_obj.response_headers, response_obj.individual_request_times)):
    print(f"Response {i+1}:")
    print(f"  Data: {resp_data}")
    print(f"  Headers: {headers}")
    print(f"  Time taken: {time_taken:.4f}s")

Asynchronous Batch POST

async def async_batch_post_example():
    from baseten_performance_client import RequestProcessingPreference

    payload1 = {"model": "my_model", "input": ["Async batch sample 1"]}
    payload2 = {"model": "my_model", "input": ["Async batch sample 2"]}
    preference = RequestProcessingPreference(
        max_concurrent_requests=32,
        timeout_s=360,
        hedge_delay=0.5,  # Enable hedging with 0.5s delay
        total_timeout_s=360,  # Total operation timeout
        extra_headers={"x-custom-header": "value"}  # Custom headers
    )
    response_obj = await client.async_batch_post(
        url_path="/v1/embeddings",
        payloads=[payload1, payload2],
        preference=preference
    )
    print(f"Async total time for batch POST: {response_obj.total_time:.4f}s")
    for i, (resp_data, headers, time_taken) in enumerate(zip(response_obj.data, response_obj.response_headers, response_obj.individual_request_times)):
        print(f"Async Response {i+1}:")
        print(f"  Data: {resp_data}")
        print(f"  Headers: {headers}")
        print(f"  Time taken: {time_taken:.4f}s")

# To run:
# asyncio.run(async_batch_post_example())

Reranking

Reranking compatible with BEI or text-embeddings-inference.

Synchronous Reranking

from baseten_performance_client import RequestProcessingPreference

query = "What is the best framework?"
documents = ["Doc 1 text", "Doc 2 text", "Doc 3 text"]
preference = RequestProcessingPreference(
    batch_size=16,
    max_concurrent_requests=32,
    timeout_s=360,
    max_chars_per_request=256000,  # Character limit per request
    hedge_delay=0.5,  # Enable hedging with 0.5s delay
    total_timeout_s=360  # Total operation timeout
)
rerank_response = client.rerank(
    query=query,
    texts=documents,
    model="rerank-model",  # Optional model specification
    return_text=True,
    preference=preference
)
for res in rerank_response.data:
    print(f"Index: {res.index} Score: {res.score}")

Asynchronous Reranking

async def async_rerank():
    from baseten_performance_client import RequestProcessingPreference

    query = "Async query sample"
    docs = ["Async doc1", "Async doc2"]
    preference = RequestProcessingPreference(
        batch_size=16,
        max_concurrent_requests=32,
        timeout_s=360,
        max_chars_per_request=256000,  # Character limit per request
        hedge_delay=0.5,  # Enable hedging with 0.5s delay
        total_timeout_s=360  # Total operation timeout
    )
    response = await client.async_rerank(
        query=query,
        texts=docs,
        model="rerank-model",  # Optional model specification
        return_text=True,
        preference=preference
    )
    for res in response.data:
        print(f"Async Index: {res.index} Score: {res.score}")

# To run:
# asyncio.run(async_rerank())

Classification

Predict (classification endpoint) compatible with BEI or text-embeddings-inference.

Synchronous Classification

from baseten_performance_client import RequestProcessingPreference

texts_to_classify = [
    "This is great!",
    "I did not like it.",
    "Neutral experience."
]
preference = RequestProcessingPreference(
    batch_size=16,
    max_concurrent_requests=32,
    timeout_s=360,
    max_chars_per_request=256000,  # Character limit per request
    hedge_delay=0.5,  # Enable hedging with 0.5s delay
    total_timeout_s=360  # Total operation timeout
)
classify_response = client.classify(
    inputs=texts_to_classify,
    model="classification-model",  # Optional model specification
    preference=preference
)
for group in classify_response.data:
    for result in group:
        print(f"Label: {result.label}, Score: {result.score}")

Asynchronous Classification

async def async_classify():
    from baseten_performance_client import RequestProcessingPreference

    texts = ["Async positive", "Async negative"]
    preference = RequestProcessingPreference(
        batch_size=16,
        max_concurrent_requests=32,
        timeout_s=360,
        max_chars_per_request=256000,  # Character limit per request
        hedge_delay=0.5,  # Enable hedging with 0.5s delay
        total_timeout_s=360  # Total operation timeout
    )
    response = await client.async_classify(
        inputs=texts,
        model="classification-model",  # Optional model specification
        preference=preference
    )
    for group in response.data:
        for res in group:
            print(f"Async Label: {res.label}, Score: {res.score}")

# To run:
# asyncio.run(async_classify())

Advanced Features

RequestProcessingPreference

The RequestProcessingPreference class provides a unified way to configure all request processing parameters. This is the recommended approach for advanced configuration as it provides better type safety and clearer intent.

from baseten_performance_client import RequestProcessingPreference

# Create a preference with custom settings
preference = RequestProcessingPreference(
    max_concurrent_requests=64,        # Parallel requests (default: 128)
    batch_size=32,                     # Items per batch (default: 128)
    timeout_s=30.0,                   # Per-request timeout (default: 3600.0)
    hedge_delay=0.5,                  # Hedging delay (default: None)
    hedge_budget_pct=0.15,            # Hedge budget percentage (default: 0.10)
    retry_budget_pct=0.08,            # Retry budget percentage (default: 0.05)
    max_retries=3,                    # Maximum HTTP retries (default: 4)
    initial_backoff_ms=250,           # Initial backoff in milliseconds (default: 125)
    total_timeout_s=300.0              # Total operation timeout (default: None)
)

# Use with any method
response = client.embed(
    input=["text1", "text2"],
    model="my_model",
    preference=preference
)

# Also works with async methods
response = await client.async_embed(
    input=["text1", "text2"],
    model="my_model",
    preference=preference
)

Property-based Configuration: You can also modify preferences after creation using property setters:

# Create preference and modify properties
preference = RequestProcessingPreference()
preference.max_concurrent_requests = 64        # Set parallel requests
preference.batch_size = 32                     # Set batch size
preference.timeout_s = 30.0                    # Set timeout
preference.hedge_delay = 0.5                   # Enable hedging
preference.hedge_budget_pct = 0.15            # Set hedge budget
preference.retry_budget_pct = 0.08            # Set retry budget
preference.max_retries = 3                     # Set max retries
preference.initial_backoff_ms = 250            # Set backoff

# Use with any method
response = client.embed(
    input=["text1", "text2"],
    model="my_model",
    preference=preference
)

Budget Percentages:

  • hedge_budget_pct: Percentage of total requests allocated for hedging (default: 10%)
  • retry_budget_pct: Percentage of total requests allocated for retries (default: 5%)
  • Maximum allowed: 300% for both budgets
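
As a rough sketch of what these percentages mean in practice (illustrative arithmetic only, not the client's internal accounting):

```python
def extra_request_allowance(total_requests: int, budget_pct: float) -> int:
    """How many extra requests (hedges or retries) a budget percentage permits."""
    return int(total_requests * budget_pct)

# With the defaults, a batch of 10,000 requests may spawn up to
# 1,000 hedge requests and 500 retry requests.
print(extra_request_allowance(10_000, 0.10))  # hedge budget
print(extra_request_allowance(10_000, 0.05))  # retry budget
```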

Retry Configuration:

  • max_retries: Maximum number of HTTP retries (default: 4, max: 4)
  • initial_backoff_ms: Initial backoff duration in milliseconds (default: 125, range: 50-30000)
  • Backoff uses exponential backoff with jitter
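
The backoff behaviour can be sketched as exponential growth from initial_backoff_ms with random jitter (a simplified model; the client's exact jitter strategy may differ):

```python
import random

def backoff_delays_ms(max_retries: int = 4, initial_backoff_ms: int = 125):
    """Illustrative exponential backoff with full jitter."""
    delays = []
    for attempt in range(max_retries):
        cap = initial_backoff_ms * (2 ** attempt)  # 125, 250, 500, 1000 ms
        delays.append(random.uniform(0, cap))      # jitter: pick uniformly below the cap
    return delays

print(backoff_delays_ms())
```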

Request Hedging

The client supports request hedging for improved latency by sending duplicate requests after a specified delay:

# Enable hedging with 0.5 second delay
preference = RequestProcessingPreference(
    hedge_delay=0.5,  # Send hedge request after 0.5s
    max_chars_per_request=256000,
    total_timeout_s=360
)
response = client.embed(
    input=texts,
    model="my_model",
    preference=preference
)

Custom Headers

Use custom headers with batch_post:

preference = RequestProcessingPreference(
    extra_headers={
        "x-custom-header": "value",
        "authorization": "Bearer token"
    }
)
response = client.batch_post(
    url_path="/v1/embeddings",
    payloads=payloads,
    preference=preference
)

HTTP Version Selection

Choose between HTTP/1.1 and HTTP/2:

# HTTP/1.1 (default, better for high concurrency)
client_http1 = PerformanceClient(base_url, api_key, http_version=1)

# HTTP/2 (better for single requests)
client_http2 = PerformanceClient(base_url, api_key, http_version=2)

Connection Pooling

Share connection pools across multiple clients:

from baseten_performance_client import HttpClientWrapper

# Create shared wrapper
wrapper = HttpClientWrapper(http_version=1)

# Reuse across multiple clients
client1 = PerformanceClient(base_url="https://api1.example.com", client_wrapper=wrapper)
client2 = PerformanceClient(base_url="https://api2.example.com", client_wrapper=wrapper)

HTTP Proxy Support

Route all HTTP requests through a proxy (e.g., for connection pooling with Envoy):

from baseten_performance_client import HttpClientWrapper

# Create wrapper with HTTP proxy
wrapper = HttpClientWrapper(
    http_version=1,
    proxy="http://envoy-proxy.local:8080"
)

# Share the wrapper across multiple clients
client1 = PerformanceClient(
    base_url="https://api1.example.com",
    api_key="your_key",
    client_wrapper=wrapper
)
client2 = PerformanceClient(
    base_url="https://api2.example.com",
    api_key="your_key",
    client_wrapper=wrapper
)
# Both clients will use the same connection pool and proxy

You can also specify the proxy directly when creating a client:

client = PerformanceClient(
    base_url="https://api.example.com",
    api_key="your_key",
    proxy="http://envoy-proxy.local:8080"
)

Error Handling

The client can raise several types of errors. Here's how to handle common ones:

  • requests.exceptions.HTTPError: This error is raised for HTTP issues, such as authentication failures (e.g., 403 Forbidden if the API key is wrong), server errors (e.g., 5xx), or if the endpoint is not found (404). You can inspect e.response.status_code and e.response.text (or e.response.json() if the body is JSON) for more details.
  • ValueError: This error can occur due to invalid input parameters (e.g., an empty input list for embed, invalid batch_size or max_concurrent_requests values). It can also be raised by response.numpy() if embeddings are not float vectors or have inconsistent dimensions.

Here's an example demonstrating how to catch these errors for the embed method:

import requests
from baseten_performance_client import RequestProcessingPreference

# client = PerformanceClient(base_url="your_baseten_url", api_key="your_baseten_api_key")

texts_to_embed = ["Hello world", "Another text example"]
try:
    preference = RequestProcessingPreference(
        batch_size=2,
        max_concurrent_requests=4,
        timeout_s=60 # Timeout in seconds
    )
    response = client.embed(
        input=texts_to_embed,
        model="your_embedding_model", # Replace with your actual model name
        preference=preference
    )
    # Process successful response
    print(f"Model used: {response.model}")
    print(f"Total tokens: {response.usage.total_tokens}")
    for item in response.data:
        embedding_preview = item.embedding[:3] if isinstance(item.embedding, list) else "Base64 Data"
        print(f"Index {item.index}, Embedding (first 3 dims or type): {embedding_preview}")

except requests.exceptions.HTTPError as e:
    status = e.response.status_code if e.response is not None else "unknown"
    print(f"An HTTP error occurred: {e} (status code: {status})")
except ValueError as e:
    print(f"Invalid input or response data: {e}")

For asynchronous methods (async_embed, async_rerank, async_classify, async_batch_post), the same exceptions will be raised by the await call and can be caught using a try...except block within an async def function.
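
A minimal sketch of that pattern, wrapping a hypothetical client object (the helper name safe_async_embed is ours, not part of the library):

```python
import asyncio
import requests

async def safe_async_embed(client, texts):
    """Await an async embed call and handle the documented error types.
    `client` is assumed to expose async_embed as shown earlier."""
    try:
        return await client.async_embed(input=texts, model="my_model")
    except requests.exceptions.HTTPError as e:
        print(f"HTTP error during async embed: {e}")
    except ValueError as e:
        print(f"Invalid input: {e}")
    return None

# To run against a real client:
# asyncio.run(safe_async_embed(client, ["Hello world"]))
```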

Development

# Install prerequisites
sudo apt-get install patchelf
# Install cargo if not already installed.

# Set up a Python virtual environment
python -m venv .venv
source .venv/bin/activate

# Install development dependencies
pip install "maturin[patchelf]" pytest requests numpy

# Build and install the Rust extension in development mode
maturin develop
cargo fmt
# Run tests
pytest tests

Contributions

Feel free to contribute to this repo; tag @michaelfeil for review.

License

MIT License

BLAKE2b-256 60565bb14d8caf8d86e8c29679276c9a7de9ac6c5b6e6e807cd7607b82ec1ca7

See more details on using hashes here.

File details

Details for the file baseten_performance_client-0.1.5-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl.

File metadata

File hashes

Hashes for baseten_performance_client-0.1.5-cp38-abi3-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl
Algorithm Hash digest
SHA256 05ebb680e6ae7471be67edaa570a9e37bbed4c8f127cd357ecebddad149f58c7
MD5 0c6cdaa33cc01dcfc8c06589d7c96a39
BLAKE2b-256 3140826100d99ba83fa087702fa2b8482de03efb955c8ef7467adf2088554eb6

See more details on using hashes here.

File details

Details for the file baseten_performance_client-0.1.5-cp38-abi3-manylinux_2_17_i686.manylinux2014_i686.whl.

File metadata

File hashes

Hashes for baseten_performance_client-0.1.5-cp38-abi3-manylinux_2_17_i686.manylinux2014_i686.whl
Algorithm Hash digest
SHA256 4289efa45a0b58030b8afd39de49cfb13ae3df2a6d6fcff914c05050201a53d4
MD5 6148e02b583c1b8020bb916d7b540c8f
BLAKE2b-256 2eda36237d3cf7d9d4d703d1b49c6a2404c82bd52b2825c2cca51e9571292122

See more details on using hashes here.

File details

Details for the file baseten_performance_client-0.1.5-cp38-abi3-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for baseten_performance_client-0.1.5-cp38-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 549774519199a9dd29d447c121d16d0539e521724cc84b86d265a85d37b2efcd
MD5 23bb4dc497cacbd0d95411e495c110f9
BLAKE2b-256 79382e92a0ce0a6b2c00b4ebc4654759c57b445df4c734f5e7526e9892f97947

See more details on using hashes here.

File details

Details for the file baseten_performance_client-0.1.5-cp38-abi3-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for baseten_performance_client-0.1.5-cp38-abi3-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 13c3b0cadcacb83499050fa14d0eaa5fd161c05bc7ff8cc88c3150e663d21993
MD5 57f61abf444c7d7fa5087bc78794b413
BLAKE2b-256 5a95b9df7ae97cbb8d8b748ef1237e638bdc3ea5d1fea9ded8b6e03a63b8644d

See more details on using hashes here.
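The published digests can be checked locally before installing a downloaded wheel. A minimal sketch using Python's standard hashlib (the wheel path and expected digest shown in the comment are illustrative, taken from the macOS x86_64 entry above):

```python
import hashlib


def file_digests(path: str) -> dict:
    """Compute the SHA256 and BLAKE2b-256 digests that PyPI publishes for a file."""
    sha256 = hashlib.sha256()
    blake2b = hashlib.blake2b(digest_size=32)  # BLAKE2b-256 means a 32-byte digest
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so large wheels don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
            blake2b.update(chunk)
    return {"sha256": sha256.hexdigest(), "blake2b_256": blake2b.hexdigest()}


# Example comparison against a published digest:
# expected = "13c3b0cadcacb83499050fa14d0eaa5fd161c05bc7ff8cc88c3150e663d21993"
# assert file_digests("baseten_performance_client-0.1.5-cp38-abi3-macosx_10_12_x86_64.whl")["sha256"] == expected
```

Alternatively, pip can enforce this automatically: pin the package in a requirements file with a `--hash=sha256:...` option and install with `pip install --require-hashes -r requirements.txt`.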
