# supabase-schemas
Per-tenant PostgreSQL schema manager for Supabase.
Run 80+ isolated app schemas inside a single Supabase instance.
Each tenant gets its own `app_<id>` schema with migration tracking and
an `auth.users` view — without paying for multiple Supabase projects.
## Quick start

```shell
# Clone and install
git clone https://github.com/nometria/supabase-schemas
cd supabase-schemas
pip install -e .

# Set your database URL
export DATABASE_URL=postgresql://postgres.your-ref:password@aws-0-us-east-1.pooler.supabase.com:5432/postgres

# Create a tenant schema
supabase-schemas create acme-corp

# List all tenant schemas
supabase-schemas list

# Run migrations for a tenant
supabase-schemas migrate acme-corp --migrations-dir ./migrations

# Clone schema structure to a new tenant (no data)
supabase-schemas clone acme-corp acme-staging

# Export a tenant's data to JSON
supabase-schemas export acme-corp --output acme.json

# Drop a tenant schema
supabase-schemas drop acme-corp --force

# Run tests
pytest tests/ -v
```
## Why

Schema-per-tenant is the most cost-effective way to build multi-tenant SaaS on Supabase: every tenant shares a single project instead of each needing its own. But there is no official CLI for managing the pattern. This tool fills that gap.
## Install

```shell
pip install supabase-schemas

# or from source:
git clone https://github.com/nometria/supabase-schemas
cd supabase-schemas
pip install -e .
```
## Setup

Set `DATABASE_URL` to your Supabase direct PostgreSQL connection string:

```shell
# .env
DATABASE_URL=postgresql://postgres.your-ref:password@aws-0-us-east-1.pooler.supabase.com:5432/postgres
```

You also need this function in your database (run it once via the Supabase SQL editor):

```sql
create or replace function public.ensure_schema_exposed(schema_name text)
returns void language plpgsql security definer as $$
begin
  execute format('create schema if not exists %I', schema_name);
  execute format('grant usage on schema %I to anon, authenticated, service_role', schema_name);
  execute format('alter default privileges in schema %I grant all on tables to anon, authenticated, service_role', schema_name);
end $$;
```
## Commands

```shell
# Create a new tenant schema
supabase-schemas create my-tenant

# Create and run migrations immediately
supabase-schemas create my-tenant --migrations-dir ./migrations

# List all tenant schemas
supabase-schemas list
supabase-schemas list --details   # also shows tables

# Run migrations for an existing schema
supabase-schemas migrate my-tenant --migrations-dir ./migrations

# Clone schema structure from one tenant to another (no data)
supabase-schemas clone acme-prod acme-staging

# Export a tenant's data to JSON
supabase-schemas export my-tenant                       # prints to stdout
supabase-schemas export my-tenant --output dump.json    # writes to file

# Drop a schema (DESTRUCTIVE — asks for confirmation)
supabase-schemas drop my-tenant
supabase-schemas drop my-tenant --force   # skip confirmation
```
## How it works

- `create` → calls `public.ensure_schema_exposed(schema_name)`, creates a `_migrations` tracking table, injects an `auth.users` view
- `migrate` → reads `.sql` files sorted alphabetically, skips already-applied ones, records each in `_migrations`
- `list` → queries `information_schema.schemata` for all `app_*` schemas
- `drop` → `DROP SCHEMA ... CASCADE` (with confirmation prompt)
- `clone` → creates the target schema, then copies tables (`CREATE TABLE ... (LIKE source INCLUDING ALL)`), views (rewrites definitions to reference the target), and functions (replaces schema qualifiers). No data is copied.
- `export` → reads every table in the schema and serializes all rows to JSON with a `_manifest` header containing the schema name, timestamp, per-table row counts, and total rows
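The `migrate` step can be sketched in a few lines of Python. This is an illustrative reconstruction, not the package's actual code; `applied` stands in for the file names already recorded in the `_migrations` table:

```python
from pathlib import Path

def pending_migrations(migrations_dir: str, applied: set[str]) -> list[Path]:
    """Return .sql files in alphabetical order, skipping already-applied ones."""
    files = sorted(Path(migrations_dir).glob("*.sql"))
    return [f for f in files if f.name not in applied]
```

Each pending file would then be executed and its name inserted into `_migrations`, so re-running `migrate` is a no-op for files already applied.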
Each tenant gets a schema named `app_<sanitized-id>` (tenant id lowercased, special characters replaced with underscores, `app_` prefix).
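A plausible sketch of that sanitization rule — an assumed reconstruction based on the naming behaviour described above, not the shipped implementation:

```python
import re

def schema_name(tenant_id: str) -> str:
    """Map a tenant id to its schema name: lowercase, runs of special
    characters collapsed to underscores, app_ prefix."""
    return "app_" + re.sub(r"[^a-z0-9]+", "_", tenant_id.lower()).strip("_")
```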
## Clone details

`clone` copies the full schema structure without any row data:

- Tables -- uses `CREATE TABLE ... (LIKE source INCLUDING ALL)` to preserve indexes, constraints, defaults, and sequences
- Views -- reads `information_schema.views` and rewrites the definition to reference the target schema
- Functions -- reads `pg_proc`/`pg_get_functiondef` and replaces the schema qualifier
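For the table step, the generated statement per table would look roughly like this (a sketch: the helper name and exact quoting are assumptions, not the package's code):

```python
def clone_table_sql(src_schema: str, dst_schema: str, table: str) -> str:
    """Build a structure-only copy statement for one table. INCLUDING ALL
    carries over defaults, constraints, indexes, and storage options;
    no rows are copied."""
    return (
        f'CREATE TABLE "{dst_schema}"."{table}" '
        f'(LIKE "{src_schema}"."{table}" INCLUDING ALL)'
    )
```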
```shell
# Clone production structure into a staging schema
supabase-schemas clone acme-prod acme-staging

# Output:
# Cloned table: orders
# Cloned table: products
# Cloned view: order_summary
# Cloned function: calculate_total
# Cloned schema 'app_acme_prod' -> 'app_acme_staging' (2 tables, 1 views, 1 functions, structure only)
```
## Export details

`export` dumps all table data to JSON with a manifest:

```shell
# Print to stdout
supabase-schemas export acme-prod

# Write to file
supabase-schemas export acme-prod --output acme-backup.json
```
Output format:

```json
{
  "_manifest": {
    "schema": "app_acme_prod",
    "exported_at": "2025-06-15T12:00:00+00:00",
    "tables": {
      "orders": 42,
      "products": 15
    },
    "total_rows": 57
  },
  "orders": [
    {"id": 1, "total": 99.50, "created_at": "2025-01-01"}
  ],
  "products": [
    {"id": 1, "name": "Widget", "price": 9.99}
  ]
}
```
Non-JSON-serializable values (bytes, datetime, UUID, Decimal) are automatically converted to strings.
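A minimal sketch of such a fallback serializer, passed as `json.dumps(..., default=...)` — illustrative of the conversion described above, not the package's exact code:

```python
import json
from datetime import date, datetime
from decimal import Decimal
from uuid import UUID

def to_str(value):
    """json.dumps fallback: stringify values JSON cannot encode natively."""
    if isinstance(value, bytes):
        return value.decode("utf-8", errors="replace")
    if isinstance(value, (datetime, date, Decimal, UUID)):
        return str(value)
    raise TypeError(f"Unserializable type: {type(value)!r}")

row = {"price": Decimal("9.99"), "blob": b"raw"}
encoded = json.dumps(row, default=to_str)
```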
## Immediate next steps

- Publish to PyPI as `supabase-schemas`
- Write a blog post: "How to run 80+ isolated Postgres schemas in one Supabase project" — drives SEO + inbound
## Commercial viability

- Low standalone (open source is the right play)
- High as a content marketing asset — "Supabase multi-tenant" is a top-10 searched pattern
- Drives inbound to the core platform: "now that you have your schemas sorted, let's self-host the whole thing"
## Example output

Running `pytest tests/ -v`:

```text
============================= test session starts ==============================
platform darwin -- Python 3.13.9, pytest-9.0.2, pluggy-1.5.0
cachedir: .pytest_cache
rootdir: /tmp/ownmy-releases/supabase-schemas
configfile: pyproject.toml
plugins: anyio-4.12.1, cov-7.1.0
collecting ... collected 4 items

tests/test_schema_manager.py::test_schema_name_format PASSED [ 25%]
tests/test_schema_manager.py::test_schema_name_sanitizes_special_chars PASSED [ 50%]
tests/test_schema_manager.py::test_import_succeeds_without_psycopg2 PASSED [ 75%]
tests/test_schema_manager.py::test_tenant_id_validation PASSED [100%]
============================== 4 passed in 0.02s ===============================
```

See `examples/sample-schema-list.txt` for what `supabase-schemas list` output looks like.