Vercel Python SDK

Installation

pip install vercel

Requirements

  • Python 3.10+

Usage

This package provides both synchronous and asynchronous clients to interact with the Vercel API.



Headers and request context

from typing import Callable

from fastapi import FastAPI, Request
from vercel.headers import geolocation, ip_address, set_headers

app = FastAPI()

@app.middleware("http")
async def vercel_context_middleware(request: Request, call_next: Callable):
    set_headers(request.headers)
    return await call_next(request)

@app.get("/api/headers")
async def headers_info(request: Request):
    ip = ip_address(request.headers)
    geo = geolocation(request)
    return {"ip": ip, "geo": geo}


Runtime Cache

Sync

from vercel.cache import get_cache

def main():
    cache = get_cache(namespace="demo")

    cache.delete("greeting")
    cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
    value = cache.get("greeting")  # dict or None
    cache.expire_tag("demo")        # invalidate by tag

Sync Client

from vercel.cache import RuntimeCache

cache = RuntimeCache(namespace="demo")

def main():
    cache.delete("greeting")
    cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
    value = cache.get("greeting")  # dict or None
    cache.expire_tag("demo")        # invalidate by tag
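
The get/set calls above compose into a common cache-aside helper. A minimal sketch (get_or_set is not part of the SDK; it works with any client exposing the get/set methods shown above):

```python
def get_or_set(cache, key, compute, ttl=60, tags=None):
    # Cache-aside: return the cached value if present; otherwise
    # compute it, store it with a TTL, and return it.
    value = cache.get(key)
    if value is None:
        value = compute()
        cache.set(key, value, {"ttl": ttl, "tags": tags or []})
    return value

# greeting = get_or_set(cache, "greeting", lambda: {"hello": "world"})
```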

Async

from vercel.cache.aio import get_cache

async def main():
    cache = get_cache(namespace="demo")

    await cache.delete("greeting")
    await cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
    value = await cache.get("greeting")  # dict or None
    await cache.expire_tag("demo")        # invalidate by tag

Async Client

from vercel.cache import AsyncRuntimeCache

cache = AsyncRuntimeCache(namespace="demo")

async def main():
    await cache.delete("greeting")
    await cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
    value = await cache.get("greeting")  # dict or None
    await cache.expire_tag("demo")        # invalidate by tag



Vercel OIDC Tokens

from typing import Callable

from fastapi import FastAPI, Request
from vercel.headers import set_headers
from vercel.oidc import decode_oidc_payload, get_vercel_oidc_token
# async
# from vercel.oidc.aio import get_vercel_oidc_token

app = FastAPI()

@app.middleware("http")
async def vercel_context_middleware(request: Request, call_next: Callable):
    set_headers(request.headers)
    return await call_next(request)

@app.get("/oidc")
def oidc():
    token = get_vercel_oidc_token()
    payload = decode_oidc_payload(token)
    user_id = payload.get("user_id")
    project_id = payload.get("project_id")

    return {
        "user_id": user_id,
        "project_id": project_id,
    }

Notes:

  • When run locally, token refresh requires a valid Vercel CLI login on the machine running the code.
  • Project info is resolved from .vercel/project.json.



Blob Storage

Requires BLOB_READ_WRITE_TOKEN to be set as an environment variable, or a token to be passed when constructing a client.

BlobClient and AsyncBlobClient keep a long-lived HTTP transport for the life of the client instance. Prefer with BlobClient(...) / async with AsyncBlobClient(...) or call close() / aclose() explicitly when done.

Sync

from vercel.blob import BlobClient

with BlobClient() as client:  # or BlobClient(token="...")
    # Create a folder entry, upload a local file, list, then download
    client.create_folder("examples/assets", overwrite=True)
    uploaded = client.upload_file(
        "./README.md",
        "examples/assets/readme-copy.txt",
        access="public",
        content_type="text/plain",
    )
    listing = client.list_objects(prefix="examples/assets/")
    client.download_file(uploaded.url, "/tmp/readme-copy.txt", overwrite=True)

Async

import asyncio
from vercel.blob import AsyncBlobClient

async def main():
    async with AsyncBlobClient() as client:  # uses BLOB_READ_WRITE_TOKEN from env
        # Upload bytes
        uploaded = await client.put(
            "examples/assets/hello.txt",
            b"hello from python",
            access="public",
            content_type="text/plain",
        )

        # Inspect metadata, list, download bytes, then delete
        meta = await client.head(uploaded.url)
        listing = await client.list_objects(prefix="examples/assets/")
        content = await client.get(uploaded.url)
        await client.delete([b.url for b in listing.blobs])

asyncio.run(main())


Multipart Uploads

For large files, the SDK provides three approaches with different trade-offs:

1. Automatic (Simplest)

The SDK handles everything automatically:

from vercel.blob import auto_multipart_upload, auto_multipart_upload_async

# Synchronous
result = auto_multipart_upload(
    "large-file.bin",
    large_data,  # bytes, file object, or iterator
    part_size=8 * 1024 * 1024,  # 8MB parts (default)
)

# Asynchronous
result = await auto_multipart_upload_async(
    "large-file.bin",
    large_data,
)

2. Uploader Pattern (Recommended)

A middle-ground that provides a clean API while giving you control over parts and concurrency:

from vercel.blob import BlobClient

# Create the uploader (initializes the upload)
with BlobClient() as client:
    uploader = client.create_multipart_uploader(
        "large-file.bin",
        content_type="application/octet-stream",
    )

    # Upload parts (you control when and how)
    parts = []
    for i, chunk in enumerate(chunks, start=1):
        part = uploader.upload_part(i, chunk)
        parts.append(part)

    # Complete the upload
    result = uploader.complete(parts)

Async version with concurrent uploads:

import asyncio

from vercel.blob import AsyncBlobClient

async with AsyncBlobClient() as client:
    uploader = await client.create_multipart_uploader("large-file.bin")

    # Upload parts concurrently
    tasks = [uploader.upload_part(i, chunk) for i, chunk in enumerate(chunks, start=1)]
    parts = await asyncio.gather(*tasks)

    # Complete
    result = await uploader.complete(parts)

The uploader pattern is ideal when you:

  • Want to control how parts are created (e.g., stream from disk, manage memory)
  • Need custom concurrency control
  • Want a cleaner API than the manual approach
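
For the stream-from-disk case, a plain generator keeps only one part in memory at a time. A sketch (iter_parts is a hypothetical helper, not part of the SDK):

```python
from typing import Iterator

def iter_parts(path: str, part_size: int = 8 * 1024 * 1024) -> Iterator[bytes]:
    # Read the file in fixed-size chunks; only one part is held in memory.
    with open(path, "rb") as f:
        while chunk := f.read(part_size):
            yield chunk

# Feeding it to the uploader shown above:
# parts = [uploader.upload_part(i, chunk)
#          for i, chunk in enumerate(iter_parts("large-file.bin"), start=1)]
```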

Notes:

  • Part numbers must be in the range 1..10,000.
  • add_random_suffix defaults to True for the uploader (matches TS SDK); manual create defaults to False.
  • Abort/cancel: an abortable uploader API is not yet exposed (future enhancement).
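
When gathering every part at once would keep too much data in flight, a semaphore bounds concurrency. A sketch (upload_bounded is a hypothetical helper layered over the uploader API above):

```python
import asyncio

async def upload_bounded(uploader, chunks, limit: int = 4):
    # Cap in-flight part uploads at `limit` while preserving part order.
    sem = asyncio.Semaphore(limit)

    async def one(part_number, chunk):
        async with sem:
            return await uploader.upload_part(part_number, chunk)

    return await asyncio.gather(
        *(one(i, chunk) for i, chunk in enumerate(chunks, start=1))
    )
```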
3. Manual (Most Control)

Full control over each step, but more verbose:

from vercel.blob import (
    create_multipart_upload,
    upload_part,
    complete_multipart_upload,
)

# Phase 1: Create
resp = create_multipart_upload("large-file.bin")
upload_id = resp["uploadId"]
key = resp["key"]

# Phase 2: Upload parts
part1 = upload_part(
    "large-file.bin",
    chunk1,
    upload_id=upload_id,
    key=key,
    part_number=1,
)
part2 = upload_part(
    "large-file.bin",
    chunk2,
    upload_id=upload_id,
    key=key,
    part_number=2,
)

# Phase 3: Complete
result = complete_multipart_upload(
    "large-file.bin",
    [part1, part2],
    upload_id=upload_id,
    key=key,
)

See examples/multipart_uploader.py for complete working examples.

Development

  • Lint/typecheck/tests:
uv pip install -e .[dev]
uv run ruff format --check && uv run ruff check . && uv run mypy src && uv run pytest -v
  • CI runs lint, typecheck, examples as smoke tests, and builds wheels.
  • Publishing: pushing a tag (vX.Y.Z) that matches project.version publishes to PyPI via Trusted Publishing.
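
The tag-matches-version rule can be checked locally before pushing. A minimal sketch (release_tag is a hypothetical helper; it assumes a version = "..." line in pyproject.toml):

```python
import re
from pathlib import Path

def release_tag(pyproject: Path) -> str:
    # Derive the vX.Y.Z tag from project.version so the tag and
    # pyproject.toml cannot drift apart.
    m = re.search(r'^version\s*=\s*"([^"]+)"', pyproject.read_text(), re.MULTILINE)
    if m is None:
        raise ValueError("no version found in pyproject.toml")
    return f"v{m.group(1)}"
```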

License

MIT



Download files

Download the file for your platform.

Source Distribution

vercel-0.5.2.tar.gz (69.5 kB)

Uploaded Source

Built Distribution


vercel-0.5.2-py3-none-any.whl (84.4 kB)

Uploaded Python 3

File details

Details for the file vercel-0.5.2.tar.gz.

File metadata

  • Download URL: vercel-0.5.2.tar.gz
  • Upload date:
  • Size: 69.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for vercel-0.5.2.tar.gz
Algorithm Hash digest
SHA256 21e5691813d2f005d2c76c595f7c5aa641928dd3298dbd4386d8acccfcdea313
MD5 53c22095253ff47837add349b87aa0e8
BLAKE2b-256 08dfb0b255279090599d35f252db06dc8f481e35102bd9ac1364b5e48a142245
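
The digests above can be verified locally with the standard library. A sketch:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash the file in chunks so large artifacts never load fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare sha256_of("vercel-0.5.2.tar.gz") against the SHA256 digest above.
```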


Provenance

The following attestation bundles were made for vercel-0.5.2.tar.gz:

Publisher: publish.yml on vercel/vercel-py

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file vercel-0.5.2-py3-none-any.whl.

File metadata

  • Download URL: vercel-0.5.2-py3-none-any.whl
  • Upload date:
  • Size: 84.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for vercel-0.5.2-py3-none-any.whl
Algorithm Hash digest
SHA256 74ff3a0caee68f79b097d9eabe49541090aab2708a6d1606bf6c85e4d7c827fe
MD5 5c265bfd71bb8fd06073f06d89882cf8
BLAKE2b-256 bff659098d7d9de69965aa48b3f327bfe4e143c51a67147eb6c0c294bbdcdee6


Provenance

The following attestation bundles were made for vercel-0.5.2-py3-none-any.whl:

Publisher: publish.yml on vercel/vercel-py

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
