
isbnator

Async Python client for the ISBNdb REST API v2 with built-in caching (file / DynamoDB / none) and rate limiting.

Installation

pip install isbnator

Dependencies: httpx, pydantic, dynamorator, logorator, python-dotenv

API Key

Resolved in order:

  1. api_key constructor argument
  2. ISBNDB_TOKEN environment variable
  3. Raises ISBNdbConfigError if neither is set

Client Initialization

from isbnator import ISBNdbClient

client = ISBNdbClient(
    api_key="<key>",               # or set ISBNDB_TOKEN env var
    cache="file",                  # "file" | "dynamodb" | "none" (default: "none")
    cache_dir=".isbndb_cache",     # directory for file cache (default: ".isbndb_cache")
    dynamo_table="isbndb-cache",   # DynamoDB table name (required if cache="dynamodb")
    cache_ttl_days=90,             # cache TTL in days (default: 90)
    max_concurrent=5,              # max parallel requests in batch (default: 5)
    max_retries=3,                 # max retries per request (default: 3)
    requests_per_second=1,         # per-second throttle (default: 1)
    request_timeout=30,            # request timeout in seconds (default: 30)
    rate_limit_warn_threshold=100, # warn when remaining quota drops below this (default: 100)
)

# Use as async context manager
async with ISBNdbClient(cache="file") as client:
    ...

Models

SearchQuery (input for batch_search)

class SearchQuery(BaseModel):
    title: str                   # required — search text
    author: str | None = None    # optional — filter by author

Book (parsed from ISBNdb responses)

class Book(BaseModel):
    title: str | None = None
    title_long: str | None = None
    isbn: str | None = None           # ISBN-10 or ISBN-13 (ISBNdb is inconsistent)
    isbn13: str | None = None         # ISBN-13
    authors: list[str] = []
    publisher: str | None = None
    date_published: str | None = None # format varies: "2015-05-12", "2015", "May 2015"
    binding: str | None = None        # e.g. "Paperback", "Hardcover", "Audio CD"
    pages: int | None = None
    dimensions: str | None = None
    image: str | None = None          # cover image URL
    language: str | None = None       # e.g. "en", "es"
    edition: str | None = None
    subjects: list[str] = []
    synopsis: str | None = None
    msrp: float | None = None

All fields are optional. Unknown fields from ISBNdb are silently ignored (extra="ignore").

QueryResult (returned by all query methods)

class QueryResult(BaseModel):
    status: str              # "success" | "not_found" | "error"
    books: list[Book] = []   # parsed Book models
    from_cache: bool = False # True if result was served from cache
    error: str | None = None # error message when status == "error"
    data: dict | None = None # raw JSON response from ISBNdb (None when from cache)

RateLimit (returned by quota())

class RateLimit(BaseModel):
    daily_limit: int = 5000
    used: int = 0
    remaining: int = 5000

Methods

All methods are async and return QueryResult unless noted.

book(isbn: str) -> QueryResult

Look up a single book by ISBN-10 or ISBN-13.

result = await client.book("9780143127550")
# result.status == "success"
# result.books == [Book(title="Everything I Never Told You - A Novel", ...)]

books(query, column=None, page=1, page_size=20) -> QueryResult

Full-text search. column can be "title", "author", "date_published", etc.

result = await client.books("Gatsby", column="title")
# result.books == [Book(...), Book(...), ...]  (up to page_size)

author(name: str) -> QueryResult

Search by author name.

result = await client.author("F. Scott Fitzgerald")

publisher(name: str) -> QueryResult

Search by publisher name.

result = await client.publisher("Penguin")

search(**kwargs) -> QueryResult

Multi-field search via /search/books. Supported kwargs: text, author, isbn, isbn13, subject, publisher. None values are excluded.

result = await client.search(text="Great Gatsby", author="Fitzgerald")

batch_search(queries, on_progress=None) -> list[QueryResult]

The primary use case. Sends multiple queries with controlled concurrency and returns results in the same order as the input.

from isbnator import SearchQuery

queries = [
    SearchQuery(title="The Great Gatsby", author="F. Scott Fitzgerald"),
    SearchQuery(title="1984", author="George Orwell"),
    SearchQuery(title="xyznonexistent"),
]

results = await client.batch_search(queries)
# len(results) == len(queries), same order

for query, result in zip(queries, results):
    print(query.title, result.status, len(result.books), result.from_cache)
    # "The Great Gatsby" "success" 20 False
    # "1984"             "success" 20 False
    # "xyznonexistent"   "not_found" 0 False

Behavior:

  • Checks cache first (batch), only queries ISBNdb for misses
  • Runs misses with max_concurrent parallelism, throttled by requests_per_second
  • Caches both successful and not_found results (negative caching)
  • Each query is cached immediately when its API response arrives (not after the full batch)
  • Failed queries return QueryResult(status="error", error="...") — never raises
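This flow can be sketched with plain asyncio (illustrative only — the cache dict, `fetch` callable, and result dicts are stand-ins for the library's internals):

```python
import asyncio

async def batch_search_sketch(queries, cache, fetch, max_concurrent=5):
    sem = asyncio.Semaphore(max_concurrent)  # caps parallel API calls

    async def run_one(q):
        if q in cache:                       # cache hit: no API call
            return {"status": cache[q], "from_cache": True}
        async with sem:                      # only misses are throttled
            try:
                status = await fetch(q)
            except Exception as exc:         # failures become error results
                return {"status": "error", "error": str(exc), "from_cache": False}
        cache[q] = status                    # cache immediately, incl. not_found
        return {"status": status, "from_cache": False}

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run_one(q) for q in queries))
```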

Progress callback

Pass an optional on_progress callback to receive events as the batch progresses. The callback is synchronous (not awaited) and receives a plain dict.

results = await client.batch_search(queries, on_progress=lambda e: print(e))

Callback signature: Callable[[dict], None] | None

Events emitted:

batch_cache_checked — emitted once after the initial cache lookup:

{
    "event": "batch_cache_checked",
    "total": 11,       # total queries in the batch
    "cached": 4,       # how many were found in cache
    "to_fetch": 7,     # how many need API calls
}

query_complete — emitted after each individual query finishes (cache hit or API call):

{
    "event": "query_complete",
    "completed": 3,          # how many queries done so far
    "total": 11,             # total queries in the batch
    "query": SearchQuery,    # the SearchQuery that was processed
    "status": "success",     # "success" | "not_found" | "error"
    "from_cache": False,     # True if result came from cache
    "books": 20,             # number of books in the result
}

Example with SSE streaming:

async with ISBNdbClient(cache="dynamodb", dynamo_table="my-table") as client:
    results = await client.batch_search(
        queries,
        on_progress=lambda e: emitter.emit(e),
    )

quota() -> RateLimit

Fetches API usage stats from /stats.

q = await client.quota()
# q.daily_limit == 5000, q.used == 42, q.remaining == 4958

Also accessible as client.rate_limit property (updated after calling quota()).

close()

Closes the underlying HTTP client. Called automatically when using async with.

Caching

Three cache backends, selected via cache constructor param.

Backend    cache=       Notes
None       "none"       Every call hits ISBNdb (default)
File       "file"       One JSON file per query in cache_dir
DynamoDB   "dynamodb"   Via dynamorator.DynamoDBStore; requires dynamo_table

Cache keys are SHA-256 hashes of the endpoint path + sorted query params. Both successful and not_found results are cached with the configured TTL. File cache checks expiry on read and deletes stale entries.
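The key scheme can be reproduced as a short sketch (assuming UTF-8 encoding and `&`-joined `key=value` pairs — details the library may implement differently):

```python
import hashlib

def cache_key(path: str, params: dict) -> str:
    # Sort params so equivalent queries map to the same key
    canonical = path + "?" + "&".join(
        f"{k}={v}" for k, v in sorted(params.items())
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```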

Exceptions

ISBNdbError              # base exception
├── ISBNdbConfigError    # missing API key or bad config
├── ISBNdbRateLimitError # daily quota exhausted (remaining == 0)
└── ISBNdbAPIError       # non-retryable API error
    .status_code: int
    .message: str

Exceptions are raised for config issues and quota exhaustion. In batch_search, individual query failures are captured in QueryResult(status="error"), not raised.
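The hierarchy above maps onto plain Python exception classes; a sketch (constructor shape for ISBNdbAPIError is an assumption, only the attribute names come from the listing):

```python
class ISBNdbError(Exception):
    """Base exception for all client errors."""

class ISBNdbConfigError(ISBNdbError):
    """Missing API key or invalid configuration."""

class ISBNdbRateLimitError(ISBNdbError):
    """Daily quota exhausted (remaining == 0)."""

class ISBNdbAPIError(ISBNdbError):
    """Non-retryable API error."""
    def __init__(self, status_code: int, message: str):
        super().__init__(f"{status_code}: {message}")
        self.status_code = status_code
        self.message = message
```

Catching the base class handles all of them:

```python
try:
    raise ISBNdbAPIError(404, "not found")
except ISBNdbError as e:
    print(e)  # handles config, rate-limit, and API errors alike
```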

Retry Logic

Retries on: HTTP 429, 5xx, connection errors, timeouts.

  • Exponential backoff with jitter (1s, 2s, 4s base)
  • Respects Retry-After header on 429
  • After max_retries exhausted: raises ISBNdbAPIError (single queries) or returns error QueryResult (batch)

Imports

from isbnator import (
    ISBNdbClient,
    SearchQuery,
    QueryResult,
    Book,
    BooksResponse,
    RateLimit,
    ISBNdbError,
    ISBNdbConfigError,
    ISBNdbRateLimitError,
    ISBNdbAPIError,
)
