Opteryx Cloud Catalog

pyiceberg-firestore-gcs

A Firestore + Google Cloud Storage (GCS) backed implementation of a lightweight catalog interface. This package provides an opinionated catalog implementation for storing table metadata documents in Firestore and consolidated Parquet manifests in GCS.

This project is intended to be used as a catalog component in GCP-based environments and provides utilities to interoperate with Avro/manifest-based workflows when needed.


Features ✅

  • Firestore-backed catalog and collection storage
  • GCS-based table metadata storage; export/import utilities provide Avro interoperability
  • Table creation, registration, listing, loading, renaming, and deletion
  • Commit operations that write updated metadata to GCS and persist references in Firestore
  • Simple, opinionated defaults (e.g., default GCS location derived from catalog properties)
  • Lightweight schema handling (supports pyarrow schemas)

Quick start 💡

  1. Ensure GCP credentials are available in the environment. Typical approaches:

    • Set GOOGLE_APPLICATION_CREDENTIALS to a service account JSON key file, or
    • Use gcloud auth application-default login for local development.
  2. Install locally (or publish to your package repo):

python -m pip install -e .
  3. Create a FirestoreCatalog via the create_catalog helper and use it in your application:
from pyiceberg_firestore_gcs import create_catalog
from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

catalog = create_catalog(
    "my_catalog",
    firestore_project="my-gcp-project",
    gcs_bucket="my-default-bucket",
)

# Create a collection
catalog.create_collection("example_collection")

# Create a simple PyIceberg schema
schema = Schema(
    NestedField(field_id=1, name="id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="name", field_type=StringType(), required=False),
)

# Create a new dataset (metadata written to a GCS path derived from the bucket property)
table = catalog.create_dataset(("example_collection", "users"), schema)

# Or register a table if you already have a metadata JSON in GCS
catalog.register_table(("example_collection", "events"), "gs://my-bucket/path/to/events/metadata/00000001.json")

# Load a table
tbl = catalog.load_dataset(("example_collection", "users"))
print(tbl.metadata)
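
The features list notes lightweight pyarrow schema support. A minimal sketch of that path, assuming create_dataset also accepts a pyarrow.Schema directly (verify against your installed version):

import pyarrow as pa

# The same two-column schema expressed with pyarrow types
pa_schema = pa.schema([
    pa.field("id", pa.int32(), nullable=False),
    pa.field("name", pa.string()),
])

table = catalog.create_dataset(("example_collection", "users_pa"), pa_schema)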

Configuration and environment 🔧

  • GCP authentication: Use GOOGLE_APPLICATION_CREDENTIALS or Application Default Credentials
  • firestore_project and firestore_database can be supplied when creating the catalog
  • gcs_bucket is recommended so create_dataset can write metadata automatically; otherwise pass location explicitly to create_dataset (see the sketch below)
  • The catalog does not write Avro/manifest-list artifacts in the hot path; use the provided export/import utilities for interoperability
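
If no default bucket is configured, pass location explicitly. A minimal sketch, with an illustrative GCS path:

# Without a gcs_bucket default, supply the metadata location yourself.
table = catalog.create_dataset(
    ("example_collection", "orders"),
    schema,
    location="gs://my-bucket/warehouse/example_collection/orders",
)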

Example environment variables:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export GOOGLE_CLOUD_PROJECT="my-gcp-project"
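
To confirm that Application Default Credentials resolve before constructing the catalog, a quick check using the google-auth library (installed alongside the GCP client libraries):

import google.auth

# Raises DefaultCredentialsError if no credentials are found
credentials, project = google.auth.default()
print(f"Authenticated; default project: {project}")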

Interoperability

This catalog implementation does not write Avro manifest-list/Avro manifest files in the hot path. Instead, table metadata is stored in Firestore and the runtime writes a consolidated Parquet manifest for fast query planning.

If you need full Avro-compatible artifacts for other engines or tools, use the provided export/import utilities to transform between Avro manifests and the Parquet-first storage layout used by this catalog.
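
The exact utility entry points are not shown here; the names below are hypothetical placeholders for illustration only. A sketch of the intended round trip:

# Hypothetical helper name for illustration only; check the package
# for the actual export/import entry points.
from pyiceberg_firestore_gcs.interop import export_avro_manifests  # hypothetical

table = catalog.load_dataset(("example_collection", "users"))

# Materialize Avro manifest/manifest-list files so other Iceberg
# engines can read the table alongside the Parquet-first layout.
export_avro_manifests(table, "gs://my-bucket/path/to/users/metadata/")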

API overview 📚

The package exports a factory helper create_catalog and the FirestoreCatalog class.

Key methods include:

  • create_collection(collection, properties={}, exists_ok=False)
  • drop_namespace(namespace)
  • list_namespaces()
  • create_dataset(identifier, schema, location=None, partition_spec=None, sort_order=None, properties={})
  • register_table(identifier, metadata_location)
  • load_dataset(identifier)
  • list_datasets(namespace)
  • drop_dataset(identifier)
  • rename_table(from_identifier, to_identifier)
  • commit_table(table, requirements, updates)
  • create_view(identifier, sql, schema=None, author=None, description=None, properties={})
  • load_view(identifier)
  • list_views(namespace)
  • view_exists(identifier)
  • drop_view(identifier)
  • update_view_execution_metadata(identifier, row_count=None, execution_time=None)
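
A short sketch exercising a few of these methods, assuming the catalog from the Quick start:

# Enumerate namespaces and the datasets within one
print(catalog.list_namespaces())
print(catalog.list_datasets("example_collection"))

# Rename a dataset, then drop it
catalog.rename_table(("example_collection", "users"), ("example_collection", "users_v2"))
catalog.drop_dataset(("example_collection", "users_v2"))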

Views 👁️

Views are SQL queries stored in the catalog that can be referenced like tables. Each view includes:

  • SQL statement: The query that defines the view
  • Schema: The expected result schema (optional but recommended)
  • Metadata: Author, description, creation/update timestamps
  • Execution history: Last run time, row count, execution time

Example usage:

from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

# Create a schema for the view
schema = Schema(
    NestedField(field_id=1, name="user_id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="username", field_type=StringType(), required=False),
)

# Create a view
view = catalog.create_view(
    identifier=("my_namespace", "active_users"),
    sql="SELECT user_id, username FROM users WHERE active = true",
    schema=schema,
    author="data_team",
    description="View of all active users in the system"
)

# Load a view
view = catalog.load_view(("my_namespace", "active_users"))
print(f"SQL: {view.sql}")
print(f"Schema: {view.metadata.schema}")

# Update execution metadata after running the view
catalog.update_view_execution_metadata(
    ("my_namespace", "active_users"),
    row_count=1250,
    execution_time=0.45
)
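
To round out the view lifecycle, a brief sketch combining view_exists and drop_view:

# Drop the view only if it exists
if catalog.view_exists(("my_namespace", "active_users")):
    catalog.drop_view(("my_namespace", "active_users"))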

Notes about behavior:

  • create_dataset will try to infer a default GCS location using the provided gcs_bucket property if location is omitted.
  • register_table validates that the provided metadata_location points to an existing GCS blob.
  • Views are stored as Firestore documents with complete metadata including SQL, schema, authorship, and execution history.
  • Table transactions are intentionally unimplemented.

Development & Linting 🧪

This package includes a small Makefile target to run linting and formatting tools (ruff, isort, pycln).

Install dev tools and run linters with:

python -m pip install --upgrade pycln isort ruff
make lint

Running tests (if you add tests):

python -m pytest

Compaction 🔧

This catalog supports small file compaction to improve query performance. See COMPACTION.md for detailed design documentation.

Quick Start

from pyiceberg_firestore_gcs import create_catalog
from pyiceberg_firestore_gcs.compaction import compact_table, get_compaction_stats

catalog = create_catalog(...)

# Check if compaction is needed
table = catalog.load_dataset(("namespace", "dataset_name"))
stats = get_compaction_stats(table)
print(f"Small files: {stats['small_file_count']}")

# Run compaction
result = compact_table(catalog, ("namespace", "dataset_name"))
print(f"Compacted {result.files_rewritten} files")

Configuration

Control compaction behavior via table properties:

table = catalog.create_dataset(
    identifier=("namespace", "table_name"),
    schema=schema,
    properties={
        "compaction.enabled": "true",
        "compaction.min-file-count": "10",
        "compaction.max-small-file-size-bytes": "33554432",  # 32 MB
        "write.target-file-size-bytes": "134217728"  # 128 MB
    }
)

Limitations & Known Issues ⚠️

  • No support for dataset-level transactions. create_dataset_transaction raises NotImplementedError.
  • The catalog stores metadata location references in Firestore; purging metadata files from GCS is not implemented.
  • This is an opinionated implementation intended for internal or controlled environments. Review for production constraints before use in multi-tenant environments.

Contributing 🤝

Contributions are welcome. Please follow these steps:

  1. Fork the repository and create a feature branch.
  2. Run and pass linting and tests locally.
  3. Submit a PR with a clear description of the change.

Please add unit tests and docs for new behaviors.

