Opteryx Cloud Catalog

pyiceberg-firestore-gcs

A Firestore and Google Cloud Storage (GCS) backed implementation of a lightweight catalog interface. This package provides an opinionated catalog that stores table metadata documents in Firestore and consolidated Parquet manifests in GCS.

Important: This library is modelled after Apache Iceberg but is not compatible with it; it is a separate implementation with different storage conventions and metadata layout. It is the catalog and metastore used by opteryx.app, with Firestore as the primary metastore and GCS for data and manifest storage.


Features ✅

  • Firestore-backed catalog and collection storage
  • GCS-based table metadata storage; export/import utilities available for artifact conversion
  • Table creation, registration, listing, loading, renaming, and deletion
  • Commit operations that write updated metadata to GCS and persist references in Firestore
  • Simple, opinionated defaults (e.g., default GCS location derived from catalog properties)
  • Lightweight schema handling (supports pyarrow schemas)

Quick start 💡

  1. Ensure you have GCP credentials available to the environment. Typical approaches:

    • Set GOOGLE_APPLICATION_CREDENTIALS to a service account JSON key file, or
    • Use gcloud auth application-default login for local development.
  2. Install locally (or publish to your package repo):

python -m pip install -e .
  3. Create a FirestoreCatalog and use it in your application:
from pyiceberg_firestore_gcs import create_catalog
from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

catalog = create_catalog(
    "my_catalog",
    firestore_project="my-gcp-project",
    gcs_bucket="my-default-bucket",
)

# Create a collection
catalog.create_collection("example_collection")

# Create a simple PyIceberg schema
schema = Schema(
    NestedField(field_id=1, name="id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="name", field_type=StringType(), required=False),
)

# Create a new dataset (metadata written to a GCS path derived from the bucket property)
table = catalog.create_dataset(("example_collection", "users"), schema)

# Or register a table if you already have a metadata JSON in GCS
catalog.register_table(("example_collection", "events"), "gs://my-bucket/path/to/events/metadata/00000001.json")

# Load a table
tbl = catalog.load_dataset(("example_collection", "users"))
print(tbl.metadata)

Configuration and environment 🔧

  • GCP authentication: Use GOOGLE_APPLICATION_CREDENTIALS or Application Default Credentials
  • firestore_project and firestore_database can be supplied when creating the catalog
  • gcs_bucket is recommended so that create_dataset can write metadata automatically; otherwise pass location explicitly to create_dataset (see the sketch after this list)
  • The catalog writes consolidated Parquet manifests and does not write manifest-list artifacts in the hot path. Use the provided export/import utilities for artifact conversion when necessary.
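
Putting these options together, a minimal sketch; the firestore_database value and the explicit location argument are illustrative assumptions, not documented defaults:

from pyiceberg_firestore_gcs import create_catalog

# Catalog configured without a default bucket; datasets must then be
# given an explicit GCS location at creation time.
catalog = create_catalog(
    "my_catalog",
    firestore_project="my-gcp-project",
    firestore_database="(default)",  # assumed value; match your Firestore setup
)

# `schema` as defined in the Quick start above.
table = catalog.create_dataset(
    ("example_collection", "users"),
    schema,
    location="gs://my-bucket/warehouse/example_collection/users",  # explicit because gcs_bucket was not set
)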

Example environment variables:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
export GOOGLE_CLOUD_PROJECT="my-gcp-project"

Manifest format

This catalog writes consolidated Parquet manifests for fast query planning and stores table metadata in Firestore. Manifests and data files are stored in GCS. If you need different artifact formats, use the provided export/import utilities to convert manifests outside the hot path.
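
For ad-hoc inspection, a consolidated manifest can be read like any other Parquet file. A hedged sketch using pyarrow (the manifest path and column layout shown are assumptions; the manifest schema is not documented here):

import pyarrow.parquet as pq

# Read a manifest straight out of GCS. Requires GCS filesystem support
# in pyarrow (built-in GcsFileSystem in recent versions, or gcsfs).
manifest = pq.read_table("gs://my-bucket/path/to/manifest.parquet")
print(manifest.schema)
print(manifest.num_rows)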

API overview 📚

The package exports a factory helper create_catalog and the FirestoreCatalog class.

Key methods include:

  • create_collection(collection, properties={}, exists_ok=False)
  • drop_namespace(namespace)
  • list_namespaces()
  • create_dataset(identifier, schema, location=None, partition_spec=None, sort_order=None, properties={})
  • register_table(identifier, metadata_location)
  • load_dataset(identifier)
  • list_datasets(namespace)
  • drop_dataset(identifier)
  • rename_table(from_identifier, to_identifier)
  • commit_table(table, requirements, updates)
  • create_view(identifier, sql, schema=None, author=None, description=None, properties={})
  • load_view(identifier)
  • list_views(namespace)
  • view_exists(identifier)
  • drop_view(identifier)
  • update_view_execution_metadata(identifier, row_count=None, execution_time=None)
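
A short sketch tying several of these together; iterating over the returned values assumes list-like results, which the README does not document:

# Enumerate namespaces and datasets, then rename and drop a dataset.
for namespace in catalog.list_namespaces():
    for dataset in catalog.list_datasets(namespace):
        print(namespace, dataset)

catalog.rename_table(("example_collection", "users"), ("example_collection", "users_v2"))
catalog.drop_dataset(("example_collection", "users_v2"))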

Views 👁️

Views are SQL queries stored in the catalog that can be referenced like tables. Each view includes:

  • SQL statement: The query that defines the view
  • Schema: The expected result schema (optional but recommended)
  • Metadata: Author, description, creation/update timestamps
  • Execution history: Last run time, row count, execution time

Example usage:

from pyiceberg.schema import Schema, NestedField
from pyiceberg.types import IntegerType, StringType

# Create a schema for the view
schema = Schema(
    NestedField(field_id=1, name="user_id", field_type=IntegerType(), required=True),
    NestedField(field_id=2, name="username", field_type=StringType(), required=False),
)

# Create a view
view = catalog.create_view(
    identifier=("my_namespace", "active_users"),
    sql="SELECT user_id, username FROM users WHERE active = true",
    schema=schema,
    author="data_team",
    description="View of all active users in the system"
)

# Load a view
view = catalog.load_view(("my_namespace", "active_users"))
print(f"SQL: {view.sql}")
print(f"Schema: {view.metadata.schema}")

# Update execution metadata after running the view
catalog.update_view_execution_metadata(
    ("my_namespace", "active_users"),
    row_count=1250,
    execution_time=0.45
)
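
The remaining view helpers follow the same identifier convention; a brief sketch based on the API overview above:

# Check for a view, enumerate the namespace, then remove the view.
if catalog.view_exists(("my_namespace", "active_users")):
    print(catalog.list_views("my_namespace"))
    catalog.drop_view(("my_namespace", "active_users"))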

Notes about behavior:

  • create_dataset will try to infer a default GCS location using the provided gcs_bucket property if location is omitted.
  • register_table validates that the provided metadata_location points to an existing GCS blob.
  • Views are stored as Firestore documents with complete metadata including SQL, schema, authorship, and execution history.
  • Table transactions are intentionally unimplemented.

Development & Linting 🧪

This package includes a small Makefile target to run linting and formatting tools (ruff, isort, pycln).

Install dev tools and run linters with:

python -m pip install --upgrade pycln isort ruff
make lint

Running tests (if you add tests):

python -m pytest

Compaction 🔧

This catalog supports small-file compaction to improve query performance. See COMPACTION.md for the detailed design documentation.

Quick Start

from pyiceberg_firestore_gcs import create_catalog
from pyiceberg_firestore_gcs.compaction import compact_table, get_compaction_stats

catalog = create_catalog(...)

# Check if compaction is needed
table = catalog.load_dataset(("namespace", "dataset_name"))
stats = get_compaction_stats(table)
print(f"Small files: {stats['small_file_count']}")

# Run compaction
result = compact_table(catalog, ("namespace", "dataset_name"))
print(f"Compacted {result.files_rewritten} files")

Configuration

Control compaction behavior via table properties:

table = catalog.create_dataset(
    identifier=("namespace", "table_name"),
    schema=schema,
    properties={
        "compaction.enabled": "true",
        "compaction.min-file-count": "10",
        "compaction.max-small-file-size-bytes": "33554432",  # 32 MB
        "write.target-file-size-bytes": "134217728"  # 128 MB
    }
)

Limitations & known issues ⚠️

  • No support for dataset-level transactions: create_dataset_transaction raises NotImplementedError (see the sketch after this list).
  • The catalog stores metadata location references in Firestore; purging metadata files from GCS is not implemented.
  • This is an opinionated implementation intended for internal or controlled environments. Review for production constraints before use in multi-tenant environments.
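
The transaction limitation surfaces as an exception at call time; a sketch (the exact signature of create_dataset_transaction is an assumption):

# Dataset-level transactions are intentionally unimplemented.
try:
    catalog.create_dataset_transaction(("example_collection", "users"), schema)  # hypothetical signature
except NotImplementedError:
    print("dataset transactions are not supported by this catalog")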

Contributing 🤝

Contributions are welcome. Please follow these steps:

  1. Fork the repository and create a feature branch.
  2. Run and pass linting and tests locally.
  3. Submit a PR with a clear description of the change.

Please add unit tests and docs for new behaviors.


