litestar-tus


TUS v1.0.0 resumable upload protocol plugin for Litestar with pluggable storage backends.

Installation

pip install litestar-tus

# With S3 support (quote the extra to avoid shell globbing, e.g. in zsh)
pip install "litestar-tus[s3]"

Quick Start

from litestar import Litestar
from litestar_tus import TUSPlugin, TUSConfig

app = Litestar(
    plugins=[TUSPlugin(TUSConfig(path_prefix="/uploads", max_size=5 * 1024**3))]
)

This registers the TUS protocol endpoints under /uploads/, supporting resumable file uploads.

Features

  • TUS v1.0.0 protocol compliance
  • Extensions: creation, creation-with-upload, termination, expiration, checksum, concatenation
  • Storage backends: local filesystem (default) and S3 (via boto3)
  • Concurrency safety: POSIX file locks (file backend) and S3 conditional writes via ETags (S3 backend)
  • Checksum verification: streaming SHA-1, SHA-256, and MD5 validation
  • Lifecycle events: hook into upload creation, progress, completion, and termination via Litestar's event system
  • Streaming: request bodies are streamed directly to storage — the S3 backend uses a rolling buffer that flushes multipart parts incrementally without buffering the full upload in memory

Configuration

TUSConfig(
    path_prefix="/uploads",       # URL prefix for TUS endpoints
    upload_dir="./uploads",       # Local storage directory (file backend)
    max_size=1024 * 1024 * 100,   # Maximum upload size in bytes (optional)
    expiration_seconds=86400,     # Upload expiration in seconds (default: 24h, None to disable)
    extensions=(                  # Protocol extensions to enable
        "creation",
        "creation-with-upload",
        "termination",
        "expiration",
        "checksum",
        "concatenation",
    ),
    storage_backend=None,         # Custom StorageBackend instance (default: FileStorageBackend)
    metadata_override=None,       # Optional hook to override Upload-Metadata based on the Request
)

Request Body Size

Litestar's built-in request_max_body_size defaults to ~9.5 MiB. If your TUS client sends chunks larger than this, PATCH requests will be rejected with 413 Content Too Large before reaching litestar-tus. Raise the limit on the Litestar app to match your expected chunk sizes:

app = Litestar(
    plugins=[TUSPlugin(TUSConfig(...))],
    request_max_body_size=1024 * 1024 * 100,  # 100 MiB
)

Storage Backends

File Backend (default)

Stores uploads on the local filesystem under upload_dir. Each upload produces three files:

| File        | Purpose                                        |
|-------------|------------------------------------------------|
| `<id>`      | Upload data                                    |
| `<id>.info` | JSON metadata (offset, size, expiration, etc.) |
| `<id>.lock` | POSIX advisory lock file                       |

Concurrency is handled with fcntl.flock — the lock file is acquired exclusively before every write, and metadata is re-read under the lock to prevent TOCTOU races.

Limitations: fcntl.flock is POSIX-only (Linux/macOS) and only guarantees exclusive access on a single node. NFS and other network filesystems do not reliably support fcntl advisory locks. For multi-worker or multi-node deployments, use the S3 backend instead.
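The lock-then-re-read discipline described above can be sketched as follows. This is an illustrative pattern, not the plugin's actual implementation; the file names match the table above, but the `locked_write` helper and the assumption that `.info` is a JSON object with an `offset` field are ours:

```python
import fcntl
import json
import os

def locked_write(upload_dir: str, upload_id: str, chunk: bytes) -> int:
    """Illustrative pattern: take an exclusive advisory lock on <id>.lock,
    then re-read the metadata *under the lock* before appending, so a stale
    offset read before the lock cannot cause a TOCTOU race."""
    lock_path = os.path.join(upload_dir, f"{upload_id}.lock")
    info_path = os.path.join(upload_dir, f"{upload_id}.info")
    data_path = os.path.join(upload_dir, upload_id)

    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until exclusive
        try:
            with open(info_path) as f:
                info = json.load(f)           # re-read under the lock
            with open(data_path, "ab") as f:
                f.write(chunk)
            info["offset"] = info.get("offset", 0) + len(chunk)
            with open(info_path, "w") as f:
                json.dump(info, f)
            return info["offset"]
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```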

S3 Backend

Uses S3 multipart uploads with a rolling buffer to stream data into parts without full-stream buffering.

import boto3
from litestar_tus import TUSConfig, TUSPlugin
from litestar_tus.backends.s3 import S3StorageBackend

s3_client = boto3.client("s3")
backend = S3StorageBackend(
    client=s3_client,
    bucket="my-bucket",
    key_prefix="uploads/",
    part_size=10 * 1024 * 1024,  # 10 MiB (default), minimum 5 MiB
)

app = Litestar(
    plugins=[TUSPlugin(TUSConfig(storage_backend=backend))]
)

Each upload produces these S3 objects:

| Object                 | Purpose                                              |
|------------------------|------------------------------------------------------|
| `<prefix><id>`         | Assembled upload data (after multipart completion)   |
| `<prefix><id>.info`    | JSON metadata                                        |
| `<prefix><id>.pending` | Temporary buffer for bytes not yet flushed as a part |

Rolling Buffer

Incoming data accumulates in a buffer. Whenever the buffer reaches part_size, a multipart part is flushed to S3. Leftover bytes smaller than part_size are persisted as a .pending object and prepended to the buffer on the next write_chunk call. On finish(), any remaining pending data is flushed as the final part and complete_multipart_upload is called.

Optimistic Concurrency Control

The S3 backend uses two layers of concurrency protection:

  1. Process-local anyio.Lock — serializes concurrent writes to the same upload within a single worker process, avoiding unnecessary S3 round-trips.
  2. S3 conditional writes via ETags — provides cross-process and cross-node safety. The ETag of the .info object is tracked and passed as IfMatch on every put_object call. If another process modified the .info object in the meantime, S3 returns 412 Precondition Failed and the write is rejected with HTTP 409. New uploads use IfNoneMatch: * to prevent duplicate creation.

This means the S3 backend is safe to run with multiple worker processes without sticky sessions or external locks.

Checksum Verification

When the checksum extension is enabled (default), clients can send an Upload-Checksum header with PATCH or creation-with-upload requests:

Upload-Checksum: sha256 <base64-encoded-digest>

Supported algorithms: sha1, sha256, md5.

The digest is computed incrementally as data streams through — no extra buffering pass required. A mismatch returns HTTP 460 per the TUS protocol specification.
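Building the header value on the client side is straightforward; note that the base64 payload encodes the raw digest bytes, not the hex string. A small helper (ours, not part of litestar-tus):

```python
import base64
import hashlib

def upload_checksum_header(data: bytes, algorithm: str = "sha256") -> str:
    """Build an Upload-Checksum header value: the algorithm name followed
    by the base64-encoded *raw* digest (not the hex representation)."""
    digest = hashlib.new(algorithm, data).digest()
    return f"{algorithm} {base64.b64encode(digest).decode()}"
```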

Concatenation

The concatenation extension (enabled by default) allows clients to upload file parts in parallel and then combine them into a single final upload. This is used by tus-js-client's parallelUploads option.

Protocol flow:

  1. Client creates N partial uploads with Upload-Concat: partial
  2. Client uploads data to each partial via PATCH (can be done in parallel)
  3. Client creates a final upload with Upload-Concat: final;/uploads/id1 /uploads/id2 ...
  4. Server concatenates all partial data in order; the final upload is immediately complete

Partial uploads support creation-with-upload (sending data in the POST body). Final uploads cannot be modified via PATCH after creation.
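The final-upload header in step 3 is just `final;` followed by the space-separated URLs of the partials, in order. A hypothetical client-side helper:

```python
def concat_final_header(partial_urls: list[str]) -> str:
    """Build the Upload-Concat header for the final upload: 'final;'
    followed by the partial upload URLs, space-separated, in the order
    their data should be concatenated."""
    return "final;" + " ".join(partial_urls)
```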

The S3 backend uses upload_part_copy to concatenate partials server-side without downloading and re-uploading data, running all copy operations in parallel. Partials smaller than 5 MiB (S3's minimum part size for copy operations) fall back to download and re-upload automatically.

Expiration

When expiration_seconds is set (default: 86400 / 24 hours), each upload receives an expires_at timestamp. Expired uploads are rejected with HTTP 410 (Gone) on HEAD, PATCH, and DELETE requests. The Upload-Expires header is included in responses so clients know the deadline.

Note: expired uploads are not automatically cleaned up from storage. Implement a background job or use S3 lifecycle rules to remove stale objects.

Events

Listen to upload lifecycle events:

from litestar.events import listener
from litestar_tus import TUSEvent, UploadInfo

@listener(TUSEvent.POST_FINISH)
async def on_upload_complete(upload_info: UploadInfo) -> None:
    print(f"Upload {upload_info.id} completed ({upload_info.offset} bytes)")

app = Litestar(
    plugins=[TUSPlugin()],
    listeners=[on_upload_complete],
)

Available events:

| Event            | When                                       |
|------------------|--------------------------------------------|
| `PRE_CREATE`     | Before an upload is created                |
| `POST_CREATE`    | After an upload is created                 |
| `POST_RECEIVE`   | After a data chunk is written              |
| `PRE_FINISH`     | Before completing (assembling) the upload  |
| `POST_FINISH`    | After the upload is completed              |
| `PRE_TERMINATE`  | Before deleting an upload                  |
| `POST_TERMINATE` | After an upload is deleted                 |

All events receive upload_info: UploadInfo as a keyword argument.

Metadata Override

Override or inject Upload-Metadata using the incoming request before the upload is created:

from litestar import Request
from litestar_tus import TUSConfig, TUSPlugin

async def metadata_override(request: Request, metadata: dict[str, bytes]) -> dict[str, bytes]:
    metadata["user_id"] = request.headers.get("authorization", "").encode()
    return metadata

app = Litestar(
    plugins=[TUSPlugin(TUSConfig(metadata_override=metadata_override))],
)

License

MIT
