
nanio


A minimal, stateless, S3-compatible object storage server.

Buckets are folders, objects are files, the entire backing store is a flat POSIX filesystem. Install with pipx, run with one command, point any official S3 client at it.

pipx install nanio
export NANIO_ACCESS_KEY=minioadmin NANIO_SECRET_KEY=minioadmin
nanio serve --data-dir ./nanio-data

In another shell, point the AWS CLI at it. The aws CLI only reads its own AWS_* env vars — it doesn't know about NANIO_* — so you have to export the same credentials under the names the AWS SDK expects:

export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
export AWS_DEFAULT_REGION=us-east-1

aws --endpoint-url http://localhost:9000 s3 mb s3://test
aws --endpoint-url http://localhost:9000 s3 cp README.md s3://test/
aws --endpoint-url http://localhost:9000 s3 ls s3://test/

That's it. The official aws-cli, boto3, and s3cmd Just Work against nanio with no special-casing.

Why nanio

Early MinIO was beautifully simple: a single binary, a single command, an S3-compatible HTTP server backed by the filesystem. Modern MinIO has grown into a feature-rich product with erasure coding, IAM, lifecycle policies, replication, and a console UI. That's the right call for them — and the wrong call for the dozens of small use cases that just need an S3 endpoint in front of a directory: local development, CI fixtures, edge caches, short-lived test environments, simple backup targets.

nanio is the small thing. It does the S3 wire protocol over a flat filesystem, and nothing else. The whole codebase is a few thousand lines of Python. There is no console, no IAM, no versioning, no lifecycle, no replication, no encryption-at-rest. There is one CLI command (nanio serve) and two environment variables.

Features

  • ✅ S3 wire compatibility for the operations the official clients actually use:
    • ListBuckets, CreateBucket, DeleteBucket, HeadBucket, GetBucketLocation
    • PutObject, GetObject, HeadObject, DeleteObject, DeleteObjects (batch)
    • ListObjectsV2 with prefix, delimiter, pagination, encoding-type
    • CopyObject
    • Multipart upload: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads
    • Presigned URLs for GET and PUT
    • Range requests (206 Partial Content)
  • ✅ AWS Signature V4 verification (header form + presigned URLs + streaming STREAMING-AWS4-HMAC-SHA256-PAYLOAD)
  • ✅ Streaming uploads and downloads — server memory does not scale with object size
  • ✅ Stateless — point N processes at a shared filesystem and put any TCP load balancer in front
  • ✅ Single-user (env vars) or multi-user (TOML credentials file)
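The SigV4 verification above is, at its core, a fixed HMAC-SHA256 chain. A stdlib sketch of the key derivation and final signature, for orientation (illustrative only, not nanio's actual code):

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """SigV4 key derivation: a fixed HMAC-SHA256 chain over date, region, service."""
    k_date    = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region  = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def signature(secret_key: str, date: str, region: str, string_to_sign: str) -> str:
    """Hex signature; the server recomputes this and compares it to the request's."""
    key = derive_signing_key(secret_key, date, region)
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```

Because both sides derive the signature independently from the shared secret, the secret key itself never travels over the wire.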

Non-features

These are deliberately out of scope. nanio is intentionally small.

  • ❌ No TLS — terminate HTTPS upstream (nginx, caddy, traefik)
  • ❌ No IAM, no policies, no ACLs — credentials are bearer tokens
  • ❌ No versioning, lifecycle, replication, or event notifications
  • ❌ No web console, no admin API, no metrics endpoint (stick a reverse proxy with metrics in front)
  • ❌ No encryption at rest — use a filesystem that does it (LUKS, dm-crypt, ZFS native)
  • ❌ No erasure coding — use a filesystem that does it (ZFS, btrfs RAID, mdraid)
  • ❌ Not supported on Windows (POSIX rename semantics, os.pread)

If you need any of those, run real MinIO, Ceph RGW, or AWS S3.

Installation

pipx install nanio

Or via uv:

uv tool install nanio

Both options put a nanio binary on your $PATH.

Configuration

nanio ships with two subcommands: serve (run the server) and install (install a systemd unit + generate an options file).

nanio serve [OPTIONS]

Options:
  --options PATH             TOML options file with [server] settings and
                             [[users]] credentials   [env: NANIO_OPTIONS_FILE]
  --data-dir PATH            Root directory for buckets   [env: NANIO_DATA_DIR]   default: ./nanio-data
  --host TEXT                Bind host                    [env: NANIO_HOST]       default: 0.0.0.0
  --port INTEGER             Bind port                    [env: NANIO_PORT]       default: 9000
  --workers INTEGER          uvicorn workers              [env: NANIO_WORKERS]    default: 1
  --region TEXT              S3 region to report          [env: NANIO_REGION]     default: us-east-1
  --log-level [debug|info|warning|error]                  default: info
  --no-access-log            Disable per-request logs
  --version
  --help

Configuration sources are merged with this precedence: CLI flag > NANIO_* env var > options file > built-in default.
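That precedence can be pictured as a ChainMap lookup (a toy sketch with hypothetical values, not nanio's internals):

```python
from collections import ChainMap

defaults     = {"host": "0.0.0.0", "port": 9000, "region": "us-east-1"}
options_file = {"port": 9100}   # [server] table in the TOML file
env          = {"port": 9200}   # NANIO_PORT was exported
cli          = {}               # no --port flag on the command line

# Leftmost mapping wins on lookup: CLI > env > options file > default.
settings = ChainMap(cli, env, options_file, defaults)

settings["port"]   # 9200: the env var beats the file because no CLI flag was given
settings["host"]   # "0.0.0.0": falls through to the built-in default
```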

Single-user (env vars)

export NANIO_ACCESS_KEY=minioadmin
export NANIO_SECRET_KEY=minioadmin
nanio serve --data-dir ./data

Multi-user (options file)

The unified options file holds both server tunables and credentials:

# nanio-options.toml
[server]
data_dir = "/var/lib/nanio"
host     = "0.0.0.0"
port     = 9000
region   = "us-east-1"

[[users]]
access_key = "alice"
secret_key = "alice-very-long-secret"

[[users]]
access_key = "bob"
secret_key = "bob-very-long-secret"

nanio serve --options nanio-options.toml

CLI flags work alongside the file — nanio serve --options options.toml --port 8080 overrides the port from the file, for example.

If neither env vars nor an options file are configured, nanio refuses to start. There is no anonymous mode.

Install as a systemd service

The easy path on a Linux box is nanio install, which generates a random (access_key, secret_key) pair, writes them into a complete options file at /etc/nanio/options.toml, and installs a hardened systemd unit at /etc/systemd/system/nanio.service. Run as root:

sudo nanio install

It prints the generated credentials once — make a note of them — and the next steps:

sudo useradd --system --no-create-home --shell /usr/sbin/nologin nanio
sudo chown -R nanio:nanio /var/lib/nanio
sudo systemctl daemon-reload
sudo systemctl enable --now nanio

After that, edit /etc/nanio/options.toml whenever you want to rotate keys, add users, or change data_dir/host/port/region, then sudo systemctl restart nanio.

nanio install accepts --prefix, --data-dir, --bin, --user, --host, --port, and --force if you need to customize the unit or dry-run into a scratch directory.
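For orientation, the installed unit looks roughly like the sketch below; the exact directives nanio writes may differ, so treat this as illustrative rather than authoritative:

```ini
[Unit]
Description=nanio S3-compatible object storage
After=network.target

[Service]
User=nanio
Group=nanio
ExecStart=/usr/local/bin/nanio serve --options /etc/nanio/options.toml
Restart=on-failure
# Typical hardening for a filesystem-backed daemon
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/nanio

[Install]
WantedBy=multi-user.target
```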

Scaling out

nanio holds zero in-process state. You scale it horizontally by:

  1. Putting every node's --data-dir on a shared filesystem (NFSv4, CephFS, or any POSIX-compliant network mount with atomic rename).
  2. Running nanio serve on N machines pointing at that mount.
  3. Putting any TCP load balancer in front (nginx, HAProxy, AWS NLB).

For example, an nginx front end that terminates TLS and streams request and response bodies:

upstream nanio {
    server node1:9000;
    server node2:9000;
    server node3:9000;
}

server {
    listen 443 ssl http2;
    server_name s3.example.com;

    ssl_certificate /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    client_max_body_size 0;          # let nanio handle huge uploads
    proxy_request_buffering off;     # stream the request body
    proxy_buffering off;             # stream the response body

    location / {
        proxy_pass http://nanio;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

All nanio nodes must run NTP/chrony — SigV4 enforces a 15-minute clock-skew window between client and server, and we do not widen it.

Storage layout

Everything under --data-dir is browsable with normal Unix tools:

nanio-data/
├── widgets/                          # bucket
│   ├── photos/2026/cat.jpg           # object
│   ├── data.bin                      # object
│   └── .nanio-meta/                  # sidecar metadata (one .json per object)
│       ├── photos/2026/cat.jpg.json
│       └── data.bin.json
└── .nanio/
    └── multipart/                    # in-progress multipart uploads
        └── <upload-id>/
            ├── init.json
            └── parts/000001.bin

You can cat, cp, rsync, and tar your data directly. Backups are just filesystem backups.
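Because the layout is just files, listing a bucket's objects is a directory walk that prunes the sidecar directories. A sketch under those layout assumptions (illustrative, not nanio's implementation):

```python
import os
import tempfile

def list_objects(bucket_dir, prefix=""):
    """Yield S3-style keys under bucket_dir, skipping nanio's bookkeeping dirs."""
    for root, dirs, files in os.walk(bucket_dir):
        # Prune metadata directories in place so os.walk never descends into them
        dirs[:] = [d for d in dirs if d not in (".nanio-meta", ".nanio")]
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), bucket_dir)
            key = rel.replace(os.sep, "/")
            if key.startswith(prefix):
                yield key

# Demo: build a throwaway bucket shaped like the tree above
bucket = tempfile.mkdtemp()
for path in ("photos/2026/cat.jpg", "data.bin", ".nanio-meta/data.bin.json"):
    full = os.path.join(bucket, path)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    open(full, "w").close()

keys = sorted(list_objects(bucket))   # ["data.bin", "photos/2026/cat.jpg"]
```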

Performance notes

nanio uses os.scandir for listing and os.pread/os.sendfile for streaming I/O. It has been validated with locust load tests at hundreds of requests per second per worker. See tests/load/README.md for the scenarios and how to run them.

A single bucket with millions of objects in a single directory is the filesystem's problem, not nanio's. ext4 indexes large directories with htree and XFS with B+tree directories, but performance still degrades past a few million entries in one place. The standard fix is the same as on AWS S3: use prefixed keys (logs/2026/04/08/... instead of one flat directory).

License

Apache 2.0. See LICENSE.

Project details


Download files

Download the file for your platform.

Source Distribution

nanio-0.1.4.tar.gz (53.4 kB)


Built Distribution


nanio-0.1.4-py3-none-any.whl (67.2 kB)


File details

Details for the file nanio-0.1.4.tar.gz.

File metadata

  • Download URL: nanio-0.1.4.tar.gz
  • Upload date:
  • Size: 53.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for nanio-0.1.4.tar.gz
Algorithm Hash digest
SHA256 55be2c365c6326a7b18b0410d26b3c2495de6e9ebf9de7b7a1753a7dd5b7bf38
MD5 c7e2c4236851d508e1533ca3b447fcd6
BLAKE2b-256 4f6c0a3cc43d667b1a064cfcc762de2b3a29dfac95528ea471cec8015ff53e12


Provenance

The following attestation bundles were made for nanio-0.1.4.tar.gz:

Publisher: publish.yml on apocas/nanio

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file nanio-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: nanio-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 67.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for nanio-0.1.4-py3-none-any.whl
Algorithm Hash digest
SHA256 b236097e1ab7a07ed585ff316ef8b3e99852b5b6e3d41c39baf97de7fba82ef2
MD5 25a299d70a9f691d3bfb7d8754eec77c
BLAKE2b-256 84a9442c1e5bd26d94a2a4f1e2af3cbc16dcb092039050fb2937bf1bd704b160


Provenance

The following attestation bundles were made for nanio-0.1.4-py3-none-any.whl:

Publisher: publish.yml on apocas/nanio

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
