
Code generation library for creating Python UDTFs from CDF Data Models


pygen-spark

A code generation library that extends pygen to generate Python User-Defined Table Functions (UDTFs) for CDF Data Models, enabling you to query CDF data directly from Spark SQL.

Recent releases:

  • 0.2.0: improved error handling, direct REST API calls, and enhanced time series UDTF support
  • 0.2.1: fixed protobuf parsing for detailed time series UDTFs
  • 0.2.2: added a SQL-native time series UDTF template with predicate pushdown support
  • 0.2.3: improved handling of CDF view properties whose names clash with Python reserved words

Full release notes are published on GitHub Releases.

Note: This document refers to packages by their PyPI names:

  • PyPI: cognite-pygen (repository: pygen)
  • PyPI: cognite-pygen-spark (repository: pygen-spark)
  • Import paths: cognite.pygen, cognite.pygen_spark

Overview

cognite.pygen_spark (PyPI: cognite-pygen-spark) is a generic Spark UDTF code generation library that works with any Spark cluster (standalone, YARN, Kubernetes, or local development). It generates strongly-typed Python UDTF functions from CDF Data Models using Jinja2 templates, allowing you to query CDF data directly from Spark SQL.

Package Purpose:

  • Generic Spark Support: Works with any Spark cluster, not limited to Databricks
  • Template-Based Generation: Uses Jinja2 templates to generate UDTF code for both Data Model UDTFs and Time Series UDTFs
  • Type Conversion Utilities: Provides TypeConverter class for converting between CDF types, PySpark DataTypes, and SQL DDL
  • Connection Configuration: Provides CDFConnectionConfig Pydantic model for managing CDF credentials from TOML/YAML files
  • Utility Functions: Helper functions for consistent UDTF naming and other generic Spark utilities

Features

  • UDTF Generation: Automatically generates Python UDTF functions for each View in a CDF Data Model
  • Time Series UDTFs: Template-generated UDTFs for querying CDF time series datapoints (single, multiple, latest) using the same template-based generation as Data Model UDTFs
  • Type Safety: Leverages pygen's internal representation for strongly-typed code generation
  • Predicate Pushdown: Generated UDTFs support filter translation from Spark SQL to CDF API filters
  • Configuration File Support: Uses TOML/YAML configuration files for secure credential management
  • Generic Spark Support: Works with any Spark cluster, not limited to Databricks
  • Type Conversion Utilities: TypeConverter class for converting between CDF types, PySpark DataTypes, and SQL DDL
  • Connection Configuration: CDFConnectionConfig Pydantic model for managing CDF credentials
  • Utility Functions: Helper functions for consistent UDTF naming and other utilities
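
The predicate-pushdown idea can be sketched as follows. This is a simplified, hypothetical illustration: the real generated UDTFs build filters with the cognite-sdk filter classes, and the function and dictionary shapes below are assumptions, not the library's API.

```python
# Hypothetical sketch of predicate pushdown: map a simple Spark SQL
# predicate onto a CDF-style filter dictionary so filtering happens
# server-side instead of after the data reaches Spark.
# The real generated UDTFs use cognite-sdk filter classes; the names
# and shapes below are illustrative only.

def spark_predicate_to_cdf_filter(column: str, op: str, value):
    """Translate one `column <op> value` predicate into a filter dict."""
    prop = ["node", column]
    if op == "=":
        return {"equals": {"property": prop, "value": value}}
    if op == ">":
        return {"range": {"property": prop, "gt": value}}
    if op == "<":
        return {"range": {"property": prop, "lt": value}}
    raise ValueError(f"unsupported operator: {op!r}")

# WHERE name = 'Mira' pushed down as an equals filter:
name_filter = spark_predicate_to_cdf_filter("name", "=", "Mira")
```

Pushing filters to the CDF API this way reduces the amount of data transferred to the Spark executors.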

Using Generic Spark Utilities

pygen-spark provides generic utilities that work with any Spark cluster:

from cognite.pygen_spark import TypeConverter, CDFConnectionConfig, to_udtf_function_name

# Type conversion utilities
from cognite.client import data_modeling as dm
from pyspark.sql.types import StringType

# Convert CDF property type to PySpark DataType
spark_type = TypeConverter.cdf_to_spark(dm.Text(), is_array=False)
# Returns: StringType()

# Convert PySpark DataType to SQL DDL
sql_ddl = TypeConverter.spark_to_sql_ddl(spark_type)
# Returns: "STRING"

# Connection configuration from TOML
config = CDFConnectionConfig.from_toml("config.toml")
client = config.create_client()

# Convert view external_id to UDTF function name
udtf_name = to_udtf_function_name("MyView")
# Returns: "my_view_udtf"

These utilities are generic and work with any Spark cluster, not just Databricks.
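
A config.toml for CDFConnectionConfig.from_toml might look like the following. The exact keys are defined by the library; the section name and field names here are assumptions based on a typical CDF client-credentials setup.

```toml
# Illustrative config.toml; key names are assumptions. Check the
# CDFConnectionConfig model for the fields it actually expects.
[cognite]
project = "my-project"
cdf_cluster = "westeurope-1"
tenant_id = "my-tenant-id"
client_id = "my-client-id"
client_secret = "my-client-secret"
```

Keep this file out of version control, since it contains credentials.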

Installation

pip install cognite-pygen-spark

Quick Start

from pathlib import Path
from cognite.client.data_classes.data_modeling.ids import DataModelId
from cognite.pygen import load_cognite_client_from_toml
from cognite.pygen_spark import SparkUDTFGenerator

# Load client from TOML file
client = load_cognite_client_from_toml("config.toml")

# Create generator
generator = SparkUDTFGenerator(
    client=client,
    output_dir=Path("./generated_udtfs"),
    data_model=DataModelId(space="sailboat", external_id="sailboat", version="1"),
    top_level_package="cognite_udtfs",
)

# Generate UDTFs for a Data Model
result = generator.generate_udtfs()

print(f"Generated {result.total_count} UDTF(s)")
for view_id, file_path in result.generated_files.items():
    print(f"  - {view_id}: {file_path}")

# Generate time series UDTFs (template-generated, same as data model UDTFs)
ts_result = generator.generate_time_series_udtfs()
print(f"Generated {ts_result.total_count} time series UDTF(s)")
for udtf_name, file_path in ts_result.generated_files.items():
    print(f"  - {udtf_name}: {file_path}")

See the User Guide for complete documentation on generating, registering, and querying UDTFs.
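
Once registered, the generated UDTFs can be called from Spark SQL. The queries below are hypothetical: the function name follows the to_udtf_function_name convention shown earlier, and the available columns depend on your data model.

```sql
-- Hypothetical queries against a generated UDTF registered as my_view_udtf.
SELECT * FROM my_view_udtf();

-- Filters such as this can be pushed down to the CDF API by the
-- generated UDTF rather than evaluated in Spark.
SELECT name FROM my_view_udtf() WHERE name = 'Mira';
```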

Architecture

cognite.pygen_spark extends cognite.pygen's architecture:

  • Reuses pygen's View parsing: Leverages pygen's internal representation of CDF Data Models
  • Custom template engine: Uses Jinja2 templates to generate UDTF Python code and SQL Views
  • Extends MultiAPIGenerator: Builds on pygen's code generation infrastructure
  • Consistent template-based generation: Both Data Model UDTFs and Time Series UDTFs use the same Jinja2 template-based generation approach for consistent behavior, error handling, and initialization patterns

See the Technical Plan for detailed architecture documentation.
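
The template-based generation approach can be illustrated with a toy example. The template text below is invented for illustration; the real udtf_function.py.jinja template is far richer.

```python
# Toy illustration of Jinja2-based code generation: render a template
# into Python UDTF source code. The template text is invented; the
# actual templates shipped with pygen-spark are much more elaborate.
from jinja2 import Template

UDTF_TEMPLATE = Template(
    '''class {{ class_name }}UDTF:
    """UDTF for the {{ view_external_id }} view."""

    COLUMNS = {{ columns }}
'''
)

source = UDTF_TEMPLATE.render(
    class_name="Sailboat",
    view_external_id="Sailboat",
    columns=["name", "length"],
)
print(source)
```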

Requirements

  • Python 3.9+
  • PySpark 3.5+ (required for UDTF support)
  • cognite-pygen (PyPI package name; import: cognite.pygen)
  • cognite-sdk (repository: cognite-sdk-python; must be installed on all Spark worker nodes)
  • Spark cluster (standalone, YARN, Kubernetes, or local)

Package Structure

pygen-spark/
├── cognite/
│   └── pygen_spark/
│       ├── __init__.py
│       ├── generator.py          # SparkUDTFGenerator
│       ├── udtf_generator.py     # SparkMultiAPIGenerator
│       └── templates/
│           ├── udtf_function.py.jinja
│           ├── view_sql.py.jinja
│           └── udtf_init.py.jinja
├── pyproject.toml
└── README.md

Development

Setup

git clone <repository-url>
cd pygen-spark
pip install -e ".[dev]"

Running Tests

pytest tests/

Spark Cluster Compatibility

This package generates UDTF code that works with any Spark cluster:

  • Code Generation: Works on all Spark versions ✅
  • UDTF Templates: Compatible with PySpark 3.5+ ✅
  • Dependency Management: Requires cognite-sdk on all Spark worker nodes ⚠️

For standalone Spark clusters, ensure cognite-sdk is installed on all worker nodes. See the Installation Guide for details.
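
How you install cognite-sdk on the workers depends on how the cluster is deployed; the commands below are illustrative, and the archive and script names are placeholders.

```shell
# Option 1: install the SDK directly on every worker node.
pip install cognite-sdk

# Option 2: ship a packed virtual environment with the job
# (archive and script names below are placeholders).
spark-submit \
  --archives pyspark_env.tar.gz#environment \
  generated_udtf_job.py
```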

Related Packages

Documentation

User Guide

Examples

Technical Documentation

License

[License information]

Contributing

[Contributing guidelines]
