
Git-compatible repositories backed by your own storage: S3, R2, Tigris, MinIO, Postgres, SFTP, local disk, NFS.


Trunks

Git repos backed by your own storage. Run trunks, keep using git, and pushes go to S3, R2, Tigris, MinIO, Postgres, SFTP, local disk, or a file share. GitHub still works as a mirror if you want it.

Trunks storage topology

Use Cases

  • Run agents in short-lived sandboxes and save their work before the sandbox is destroyed.
  • Let one agent push work and another agent pull it on a different machine.
  • Sync repo state to S3, R2, Tigris, MinIO, Postgres, SFTP, local disk, or a file share.
  • Keep customer code in your own bucket, VPC, or enterprise storage.
  • Mirror finished work to GitHub for PRs and review when you are ready.

Install

pip install trunks

60-second start (local disk, no cloud)

cd myrepo
trunks init
trunks storage add --name primary --backend local --path ~/trunks-store

git checkout -b feature/auth
git add . && git commit -m "fix auth"
git push

That push lands in ~/trunks-store/trunks/myrepo.trunk/. No origin needed.

Cloud start (S3, R2, Tigris, MinIO)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1

cd myrepo
trunks init
trunks storage add --name primary --backend s3 --bucket my-bucket

Here is what happens:

  1. Trunks turns the repo name into a path: myrepo becomes s3://my-bucket/trunks/myrepo.trunk.
  2. It does a real read, write, and list on the bucket to make sure the credentials work.
  3. It saves the target locally in .trunks/myrepo.trunk.
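The mapping in step 1 is simple enough to sketch. This helper is illustrative only, not Trunks's actual code:

```python
# Hypothetical sketch of how a repo name could map to a bucket path,
# mirroring the s3://my-bucket/trunks/myrepo.trunk convention above.
def trunk_url(bucket: str, repo: str) -> str:
    # One trunk per repo, under a shared trunks/ prefix.
    return f"s3://{bucket}/trunks/{repo}.trunk"

print(trunk_url("my-bucket", "myrepo"))  # s3://my-bucket/trunks/myrepo.trunk
```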

Credentials stay local. If you pass inline flags like --access-key, --secret-key, or --password, Trunks saves them only in the local .trunks/<repo>.trunk database, masks them in trunks storage show, and never writes them to the remote backend. You can also use env vars, CI secrets, IAM roles, or your SSH agent instead.
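The masking in trunks storage show might look roughly like this small helper (hypothetical sketch; the real implementation may differ):

```python
def mask_secret(value: str, keep: int = 4) -> str:
    # Show only a short prefix, hide the rest -- a sketch of how
    # `trunks storage show` could mask --access-key / --secret-key
    # values (illustrative, not Trunks's actual code).
    if len(value) <= keep:
        return "*" * len(value)
    return value[:keep] + "*" * (len(value) - keep)

print(mask_secret("AKIAIOSFODNN7EXAMPLE"))  # AKIA****************
```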

Now use git like you always do:

git checkout -b feature/auth
git add . && git commit -m "fix auth"
git push

The push lands in s3://my-bucket/trunks/myrepo.trunk/.

Multi-machine handoff

Sandbox-to-sandbox replication via Trunks

One sandbox pushes a branch. Another sandbox pulls it, keeps working, pushes back. The map of which storage holds what lives inside the trunk, so a fresh sandbox only needs the primary URL to find the mirrors.

Multiple accounts, multiple buckets

Per-storage env vars take precedence over the globals. This is how CI runs against more than one account at once:

export TRUNKS_STORAGE_PRIMARY_ACCESS_KEY=AKIA...
export TRUNKS_STORAGE_PRIMARY_SECRET_KEY=...
export TRUNKS_STORAGE_BACKUP_ACCESS_KEY=AKIA...    # different account
export TRUNKS_STORAGE_BACKUP_SECRET_KEY=...

Each named storage looks up TRUNKS_STORAGE_<NAME>_<KEY> first, then falls back to the provider defaults like AWS_* or R2_*.
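That lookup order can be sketched with a hypothetical helper (illustrative only; `storage_credential` and its signature are not part of Trunks's API):

```python
import os

def storage_credential(name: str, key: str, provider_fallbacks: list) -> str:
    """Sketch of the documented order: TRUNKS_STORAGE_<NAME>_<KEY> first,
    then provider defaults such as AWS_* (not Trunks's real code)."""
    scoped = f"TRUNKS_STORAGE_{name.upper()}_{key.upper()}"
    if scoped in os.environ:
        return os.environ[scoped]
    for fallback in provider_fallbacks:
        if fallback in os.environ:
            return os.environ[fallback]
    return None

# "backup" has a scoped var; "primary" falls back to AWS_ACCESS_KEY_ID.
os.environ["TRUNKS_STORAGE_BACKUP_ACCESS_KEY"] = "AKIA-backup"
os.environ["AWS_ACCESS_KEY_ID"] = "AKIA-global"
print(storage_credential("backup", "access_key", ["AWS_ACCESS_KEY_ID"]))   # AKIA-backup
print(storage_credential("primary", "access_key", ["AWS_ACCESS_KEY_ID"]))  # AKIA-global
```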

Mirrors

Push to more than one place at the same time:

trunks storage add --name primary --backend s3  --bucket company-primary
trunks storage add --name backup  --backend r2  --bucket company-backup --account-id $R2_ACCOUNT_ID --mirror
trunks storage add --name nas     --backend local --path /mnt/company/trunks --mirror

Pushes are strict. If backup is down, the push fails. No silent half-syncs.
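The strict semantics can be sketched as: check every target first, fail before writing anywhere. This is an illustrative stand-in, and `ping`/`push` here are hypothetical hooks, not Trunks's real API:

```python
# All-or-nothing push: if any storage target is down, abort before
# writing to any of them, so no mirror is left half-synced.
def push_strict(targets: dict) -> None:
    down = [name for name, t in targets.items() if not t["ping"]()]
    if down:
        raise RuntimeError("push aborted, storage down: " + ", ".join(down))
    for t in targets.values():
        t["push"]()

pushed = []
targets = {
    "primary": {"ping": lambda: True,  "push": lambda: pushed.append("primary")},
    "backup":  {"ping": lambda: False, "push": lambda: pushed.append("backup")},
}
try:
    push_strict(targets)
except RuntimeError as exc:
    print(exc)   # push aborted, storage down: backup
print(pushed)    # [] -- nothing was half-synced
```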

Storage health

trunks storage list
trunks storage show primary         # masks secrets
trunks storage ping                 # primary + mirrors
trunks storage ping primary
trunks storage ping s3://my-bucket  # test before saving
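A ping is a real round-trip, not just a connection check. Against a local backend it could look roughly like this sketch (`ping_local` is hypothetical, not Trunks's implementation):

```python
import os
import tempfile
import uuid

def ping_local(root: str) -> bool:
    """Sketch of what a storage ping could verify: an actual write,
    read, and list against the backend (illustrative only)."""
    probe = os.path.join(root, f".trunks-ping-{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:          # write
            f.write("ping")
        with open(probe) as f:               # read back and compare
            if f.read() != "ping":
                return False
        return os.path.basename(probe) in os.listdir(root)  # list
    finally:
        if os.path.exists(probe):
            os.remove(probe)                 # leave no probe behind

with tempfile.TemporaryDirectory() as d:
    print(ping_local(d))  # True
```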

Storage layout

  • Backend: the storage root, like s3://company-code.
  • Trunk: one repo inside that backend, like s3://company-code/trunks/lazy-lms.trunk/.

Trunks stores git-compatible blobs, trees, commits, and refs. The on-disk layout is its own: objects are content-addressed and shared across branches, segments are batched, large blobs are chunked, every read is hash-verified. Not git LFS. GitHub can still be a mirror for review; Trunks keeps the repo data in the storage you chose.
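The content-addressed, hash-verified idea can be illustrated with a toy in-memory store (illustrative only; Trunks's real on-disk layout with segments and chunking is more involved):

```python
import hashlib

class ObjectStore:
    """Toy content-addressed store: objects are keyed by their hash,
    identical blobs are stored once, and every read is verified."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = data            # dedup: same content, same key
        return key

    def get(self, key: str) -> bytes:
        data = self._objects[key]
        if hashlib.sha256(data).hexdigest() != key:   # verify on read
            raise IOError(f"corrupt object {key}")
        return data

store = ObjectStore()
k = store.put(b"hello")
assert store.get(k) == b"hello"
```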

CLI

Command                                                What it does
trunks                                                 drop into the managed shell
trunks init                                            create .trunks/<repo>.trunk locally
trunks storage add --name <name> --backend <type> ...  connect a named primary
trunks storage add ... --mirror                        add a mirror
trunks storage list                                    list the storage you have
trunks storage show <name>                             show one target with secrets masked
trunks storage ping [<name>]                           check the primary and the mirrors
trunks storage remove <name>                           drop a target
trunks status                                          repo, branch, storage, mirrors, dirty state
trunks push / pull / fetch                             sync with the storage you have
trunks check [--clean]                                 verify the local repo and optionally GC dead objects

Inside trunks, use git the normal way: git status, git add, git commit, git push.

Backends

Backend              URL form                                      Setup
S3                   s3://bucket/path                              docs/backends/s3.md
MinIO                s3://bucket --endpoint http://host:9000       docs/backends/s3.md
Cloudflare R2        r2://bucket/path                              docs/backends/s3.md
Tigris               tigris://bucket                               docs/backends/s3.md
Backblaze B2         b2://bucket                                   docs/backends/s3.md
Wasabi               wasabi://bucket                               docs/backends/s3.md
DigitalOcean Spaces  spaces://bucket                               docs/backends/s3.md
Azure Blob           azure://account/container/path                docs/backends/azure.md
GCS                  gcs://bucket/path                             docs/backends/gcs.md
SFTP                 sftp://user@host/path                         docs/backends/sftp.md
Postgres             postgres://user:pw@host/db/trunks/repo.trunk  docs/backends/postgres.md
Local disk           local:///path/to/dir                          docs/backends/local.md
NFS / SMB            file:///mnt/share                             docs/backends/local.md
Memory (tests)       memory://                                     docs/backends/memory.md

Python

from trunks import Trunk

with Trunk(backend="s3://company-code", name="lazy-lms") as trunk:
    trunk.pull()
    print(trunk.read("README.md"))

Async works too. The methods notice a running event loop and pick sync or async on their own:

import asyncio
from trunks import Trunk

async def serve_file(path: str) -> bytes:
    async with Trunk(backend="s3://company-code", name="lazy-lms") as trunk:
        await trunk.pull()
        return await trunk.read(path)

asyncio.run(serve_file("README.md"))
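The loop detection behind that can be sketched with a hypothetical `smart_read` (illustrative only, not Trunks's implementation):

```python
import asyncio

def smart_read(path: str):
    """If an event loop is already running, return an awaitable for the
    caller to await; otherwise do the work synchronously (sketch)."""
    async def _read_async() -> str:
        return f"contents of {path}"
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(_read_async())   # no loop: run to completion now
    return _read_async()                    # loop running: caller awaits

print(smart_read("README.md"))              # sync call returns the value

async def main():
    print(await smart_read("README.md"))    # async call must be awaited

asyncio.run(main())
```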

License

MIT.



Download files

Download the file for your platform.

Source Distribution

trunks-0.1.0.tar.gz (53.4 kB)

Uploaded Source

Built Distribution


trunks-0.1.0-py3-none-any.whl (70.6 kB)

Uploaded Python 3

File details

Details for the file trunks-0.1.0.tar.gz.

File metadata

  • Download URL: trunks-0.1.0.tar.gz
  • Upload date:
  • Size: 53.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for trunks-0.1.0.tar.gz
Algorithm    Hash digest
SHA256       dabda3f822e6a8aa344b3195d484b3204f086ce90e3a6ee770f5353a760603e3
MD5          0972b0ba979a709e9155a4e18e59f8ba
BLAKE2b-256  0b014b950da7bde8eef21824773f4f0f110d1912543c221b07efbef06afeb11d


File details

Details for the file trunks-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: trunks-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 70.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.4

File hashes

Hashes for trunks-0.1.0-py3-none-any.whl
Algorithm    Hash digest
SHA256       a4fcbff1aa9f7f9b3b438c1d70f05f992591fe92233ed7fd58f168ae030ed066
MD5          88402bd0c1a0dc43b66ef06ff2fd9da2
BLAKE2b-256  eb2186c236641facc16d47fe9b0de421c7fb7320bdce24d4997eb88a7c0caf57

