# Opteryx Cloud Catalog (`pyiceberg-firestore-gcs`)
A Firestore + Google Cloud Storage (GCS) backed implementation of the PyIceberg catalog interface. This package provides a straightforward, opinionated catalog that keeps table metadata documents in Firestore and the Iceberg table metadata JSON in GCS.
This project is intended to be used as a catalog component for PyIceberg in GCP-based environments.
## Features ✅
- Firestore-backed catalog and namespace storage
- GCS-based Iceberg table metadata storage (with optional compatibility mode); export/import utilities provide Iceberg Avro interoperability
- Table creation, registration, listing, loading, renaming, and deletion
- Commit operations that write updated metadata to GCS and persist references in Firestore
- Simple, opinionated defaults (e.g., default GCS location derived from catalog properties)
- Lightweight schema handling compatible with PyIceberg (supports pyarrow schemas and PyIceberg Schema)
## Quick start 💡

1. Ensure GCP credentials are available in the environment. Typical approaches:
   - Set `GOOGLE_APPLICATION_CREDENTIALS` to a service account JSON key file, or
   - Use `gcloud auth application-default login` for local development.

2. Install locally (or publish to your package repo):

   ```bash
   python -m pip install -e .
   ```

3. Create a `FirestoreCatalog` and use it in your application:
```python
from pyiceberg_firestore_gcs import create_catalog
from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

catalog = create_catalog(
    "my_catalog",
    firestore_project="my-gcp-project",
    gcs_bucket="my-default-bucket",
)

# Create a namespace
catalog.create_namespace("example_namespace")

# Create a simple PyIceberg schema
schema = Schema(
    NestedField(field_id=1, name="id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="name", field_type=StringType(), required=False),
)

# Create a new table (metadata written to a GCS path derived from the bucket property)
table = catalog.create_table(("example_namespace", "users"), schema)

# Or register a table if you already have a metadata JSON in GCS
catalog.register_table(("example_namespace", "events"), "gs://my-bucket/path/to/events/metadata/00000001.json")

# Load a table
tbl = catalog.load_table(("example_namespace", "users"))
print(tbl.metadata)
```
## Configuration and environment 🔧

- GCP authentication: use `GOOGLE_APPLICATION_CREDENTIALS` or Application Default Credentials.
- `firestore_project` and `firestore_database` can be supplied when creating the catalog.
- `gcs_bucket` is recommended so that `create_table` can write metadata automatically; otherwise pass `location` explicitly to `create_table`.
- The catalog does not write Iceberg Avro/manifest-list artifacts in the hot path; use `export_to_iceberg`/`import_from_iceberg` for interoperability.

Example environment variables:

```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export GOOGLE_CLOUD_PROJECT="my-gcp-project"
```
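To make the default-location behavior concrete, here is a minimal sketch of how a table location might be derived from the `gcs_bucket` property when `location` is omitted. `default_table_location` is a hypothetical helper for illustration; the package's actual path layout may differ:

```python
def default_table_location(bucket: str, namespace: str, table: str) -> str:
    """Derive a default GCS location for a table.

    Hypothetical sketch: the real catalog may use a different layout.
    """
    if not bucket:
        raise ValueError("gcs_bucket must be set, or pass location= to create_table")
    return f"gs://{bucket}/{namespace}/{table}"

# A table created without an explicit location would land under the default bucket
location = default_table_location("my-default-bucket", "example_namespace", "users")
print(location)  # gs://my-default-bucket/example_namespace/users
```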
## Iceberg interoperability

This catalog implementation does not write Iceberg Avro manifest/manifest-list files or Iceberg metadata JSON in the hot path. Instead, table metadata is stored in Firestore and the runtime writes a consolidated Parquet manifest for fast query planning.

If you need full Iceberg-compatible artifacts for other engines or tools, use the `export_to_iceberg` utility to generate Avro manifests and manifest lists from the Parquet-first storage layout. To ingest existing Iceberg Avro artifacts into this catalog, use `import_from_iceberg`, which converts Avro manifests into the Parquet manifest + Firestore snapshot representation used here.
## API overview 📚

The package exports a factory helper `create_catalog` and the `FirestoreCatalog` class. Key methods include:

- `create_namespace(namespace, properties={}, exists_ok=False)`
- `drop_namespace(namespace)`
- `list_namespaces()`
- `create_table(identifier, schema, location=None, partition_spec=None, sort_order=None, properties={})`
- `register_table(identifier, metadata_location)`
- `load_table(identifier)`
- `list_tables(namespace)`
- `drop_table(identifier)`
- `rename_table(from_identifier, to_identifier)`
- `commit_table(table, requirements, updates)`
- `create_view(identifier, sql, schema=None, author=None, description=None, properties={})`
- `load_view(identifier)`
- `list_views(namespace)`
- `view_exists(identifier)`
- `drop_view(identifier)`
- `update_view_execution_metadata(identifier, row_count=None, execution_time=None)`
## Views 👁️
Views are SQL queries stored in the catalog that can be referenced like tables. Each view includes:
- SQL statement: The query that defines the view
- Schema: The expected result schema (optional but recommended)
- Metadata: Author, description, creation/update timestamps
- Execution history: Last run time, row count, execution time
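The fields above might be modeled roughly as follows. This `ViewRecord` dataclass is a hypothetical sketch of the shape of a view document, not the package's actual Firestore layout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewRecord:
    """Hypothetical shape of a stored view; the real Firestore document may differ."""
    sql: str                                    # the query that defines the view
    schema: Optional[object] = None             # expected result schema (optional)
    author: Optional[str] = None                # metadata: who created it
    description: Optional[str] = None
    last_row_count: Optional[int] = None        # execution history
    last_execution_time: Optional[float] = None

view = ViewRecord(sql="SELECT 1", author="data_team")
```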
Example usage:

```python
from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

# Create a schema for the view
schema = Schema(
    NestedField(field_id=1, name="user_id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="username", field_type=StringType(), required=False),
)

# Create a view
view = catalog.create_view(
    identifier=("my_namespace", "active_users"),
    sql="SELECT user_id, username FROM users WHERE active = true",
    schema=schema,
    author="data_team",
    description="View of all active users in the system",
)

# Load a view
view = catalog.load_view(("my_namespace", "active_users"))
print(f"SQL: {view.sql}")
print(f"Schema: {view.metadata.schema}")

# Update execution metadata after running the view
catalog.update_view_execution_metadata(
    ("my_namespace", "active_users"),
    row_count=1250,
    execution_time=0.45,
)
```
Notes about behavior:

- `create_table` will try to infer a default GCS location using the provided `gcs_bucket` property if `location` is omitted.
- `register_table` validates that the provided `metadata_location` points to an existing GCS blob.
- Views are stored as Firestore documents with complete metadata including SQL, schema, authorship, and execution history.
- Table transactions are intentionally unimplemented.
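The `register_table` validation described above can be sketched as follows. Here `blob_exists` is a hypothetical stand-in for a real GCS existence check (in practice this would go through the `google-cloud-storage` client), so this is illustrative rather than the package's actual code:

```python
def validate_metadata_location(metadata_location: str, blob_exists) -> str:
    """Reject registrations whose metadata JSON is missing from GCS.

    `blob_exists` is a callable `(gs_uri) -> bool`; hypothetical sketch.
    """
    if not metadata_location.startswith("gs://"):
        raise ValueError(f"expected a gs:// URI, got {metadata_location!r}")
    if not blob_exists(metadata_location):
        raise FileNotFoundError(f"metadata not found: {metadata_location}")
    return metadata_location

# With a fake existence check, a present blob passes validation
present = {"gs://my-bucket/meta/00000001.json"}
ok = validate_metadata_location("gs://my-bucket/meta/00000001.json", present.__contains__)
```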
## Development & Linting 🧪

This package includes a small Makefile target to run linting and formatting tools (ruff, isort, pycln).

Install dev tools and run linters with:

```bash
python -m pip install --upgrade pycln isort ruff
make lint
```

Running tests (if you add tests):

```bash
python -m pytest
```
## Compaction 🔧

This catalog supports small-file compaction to improve query performance. See COMPACTION.md for detailed design documentation.

### Quick start

```python
from pyiceberg_firestore_gcs import create_catalog
from pyiceberg_firestore_gcs.compaction import compact_table, get_compaction_stats

catalog = create_catalog(...)

# Check if compaction is needed
table = catalog.load_table(("namespace", "table_name"))
stats = get_compaction_stats(table)
print(f"Small files: {stats['small_file_count']}")

# Run compaction
result = compact_table(catalog, ("namespace", "table_name"))
print(f"Compacted {result.files_rewritten} files")
```
### Configuration

Control compaction behavior via table properties:

```python
table = catalog.create_table(
    identifier=("namespace", "table_name"),
    schema=schema,
    properties={
        "compaction.enabled": "true",
        "compaction.min-file-count": "10",
        "compaction.max-small-file-size-bytes": "33554432",  # 32 MB
        "write.target-file-size-bytes": "134217728",  # 128 MB
    },
)
```
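To illustrate how the min-file-count and small-file-size thresholds interact, the sketch below selects candidate files for a rewrite. `select_small_files` is a hypothetical helper, not the package's actual compaction implementation:

```python
def select_small_files(file_sizes, min_file_count=10, max_small_file_size=32 * 1024 * 1024):
    """Return files below the small-file threshold, but only when there are
    enough of them to justify a rewrite (hypothetical sketch)."""
    small = [size for size in file_sizes if size < max_small_file_size]
    if len(small) < min_file_count:
        return []  # not worth compacting yet
    return small

# Twelve 1 MB files exceed the default min-file-count of 10, so all are selected
sizes = [1024 * 1024] * 12
print(len(select_small_files(sizes)))  # 12
```

Files at or above the 32 MB threshold are never rewritten, and a handful of small files below the count threshold are left alone until more accumulate.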
## Limitations & known issues ⚠️

- No support for table-level transactions; `create_table_transaction` raises `NotImplementedError`.
- The catalog stores metadata location references in Firestore; purging metadata files from GCS is not implemented.
- This is an opinionated implementation intended for internal or controlled environments. Review for production constraints before use in multi-tenant environments.
## Contributing 🤝

Contributions are welcome. Please follow these steps:

1. Fork the repository and create a feature branch.
2. Run and pass linting and tests locally.
3. Submit a PR with a clear description of the change.

Please add unit tests and docs for new behaviors.
Usage examples showing row inserts via PyIceberg readers/writers, and CI testing steps for the repository, would be welcome additions. ✅
## File details: opteryx_catalog-0.3.1.tar.gz

File metadata:

- Download URL: opteryx_catalog-0.3.1.tar.gz
- Upload date:
- Size: 62.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7b94dcef0db7fcc4eb7dba425fd66b75a3562f3987d8049b67da0440f7cb76f8` |
| MD5 | `db4d795473792d79ddb9ce14ba6fd028` |
| BLAKE2b-256 | `0dd849df2453a4be21a6b7284c9ee9ac4e620e0e438b6970f83785618c01524f` |
### Provenance

The following attestation bundles were made for opteryx_catalog-0.3.1.tar.gz:

Publisher: release.yaml on mabel-dev/pyiceberg-firestore-gcs

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: opteryx_catalog-0.3.1.tar.gz
- Subject digest: 7b94dcef0db7fcc4eb7dba425fd66b75a3562f3987d8049b67da0440f7cb76f8
- Sigstore transparency entry: 781288991
- Sigstore integration time:
- Permalink: mabel-dev/pyiceberg-firestore-gcs@e03a7f1b86331cb55b6725f1cf49d2d49cc397ab
- Branch / Tag: refs/tags/version-0.3.1
- Owner: https://github.com/mabel-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@e03a7f1b86331cb55b6725f1cf49d2d49cc397ab
- Trigger Event: push
## File details: opteryx_catalog-0.3.1-py3-none-any.whl

File metadata:

- Download URL: opteryx_catalog-0.3.1-py3-none-any.whl
- Upload date:
- Size: 82.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `abda41b981ef0602d9fefaecc13864926695b7881a75806ce56e4fc8b8e46804` |
| MD5 | `3a60096e0da1dad6f272377994216806` |
| BLAKE2b-256 | `d0b1b0f7a68b92e586240cec71bc9ff55a958320d69e40568a3c65d142622369` |
### Provenance

The following attestation bundles were made for opteryx_catalog-0.3.1-py3-none-any.whl:

Publisher: release.yaml on mabel-dev/pyiceberg-firestore-gcs

Statement:

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: opteryx_catalog-0.3.1-py3-none-any.whl
- Subject digest: abda41b981ef0602d9fefaecc13864926695b7881a75806ce56e4fc8b8e46804
- Sigstore transparency entry: 781288992
- Sigstore integration time:
- Permalink: mabel-dev/pyiceberg-firestore-gcs@e03a7f1b86331cb55b6725f1cf49d2d49cc397ab
- Branch / Tag: refs/tags/version-0.3.1
- Owner: https://github.com/mabel-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yaml@e03a7f1b86331cb55b6725f1cf49d2d49cc397ab
- Trigger Event: push