
poor man's data lake


PyDala2


📖 Overview

PyDala2 is a high-performance Python library for managing Parquet datasets with advanced metadata capabilities. Built on Apache Arrow, it provides:

  • Smart dataset management with metadata optimization
  • Multi-format support (Parquet, CSV, JSON)
  • Multi-backend integration (Polars, PyArrow, DuckDB, Pandas)
  • Advanced querying with predicate pushdown
  • Schema management with automatic validation
  • Performance optimization with caching and partitioning
  • Catalog system for centralized dataset management

✨ Key Features

  • 🚀 High Performance: Built on Apache Arrow with optimized memory usage and processing speed
  • 📊 Smart Dataset Management: Efficient Parquet handling with metadata optimization and caching
  • 🔄 Multi-backend Support: Seamlessly switch between Polars, PyArrow, DuckDB, and Pandas
  • 🔍 Advanced Querying: SQL-like filtering with predicate pushdown for maximum efficiency
  • 📋 Schema Management: Automatic validation, evolution, and tracking of data schemas
  • ⚡ Performance Optimization: Built-in caching, compression, and intelligent partitioning
  • 🛡️ Type Safety: Comprehensive validation and error handling throughout the library
  • 🏗️ Catalog System: Centralized dataset management across namespaces

🚀 Quick Start

Installation

# Install PyDala2
pip install pydala2

# Install with all optional dependencies (quote extras for zsh compatibility)
pip install "pydala2[all]"

# Install with specific backends
pip install "pydala2[polars,duckdb]"

Basic Usage

from pydala import ParquetDataset
import pandas as pd

# Create a dataset
dataset = ParquetDataset("data/my_dataset")

# Write data
data = pd.DataFrame({
    'id': range(100),
    'category': ['A', 'B', 'C'] * 33 + ['A'],
    'value': [i * 2 for i in range(100)]
})
dataset.write_to_dataset(
    data=data,
    partition_cols=['category']
)

# Read with filtering - automatic backend selection
result = dataset.filter("category IN ('A', 'B') AND value > 50")

# Export to different formats
df_polars = result.table.to_polars()  # or use shortcut: result.t.pl
df_pandas = result.table.df           # or result.t.df
duckdb_rel = result.table.ddb         # or result.t.ddb

Using Different Backends

# PyDala2 provides automatic backend selection.
# Just access the data in your preferred format:
import polars as pl

# Polars LazyFrame (recommended for performance)
lazy_df = dataset.table.pl  # or dataset.t.pl
result = (
    lazy_df
    .filter(pl.col("value") > 100)
    .group_by("category")
    .agg(pl.mean("value"))
    .collect()
)

# DuckDB (for SQL queries)
result = dataset.ddb_con.sql("""
    SELECT category, AVG(value) as avg_value
    FROM dataset
    GROUP BY category
""").to_arrow()

# PyArrow Table (for columnar operations)
table = dataset.table.arrow  # or dataset.t.arrow

# Pandas DataFrame (for compatibility)
df_pandas = dataset.table.df  # or dataset.t.df

# Direct export methods
df_polars = dataset.table.to_polars(lazy=False)
table = dataset.table.to_arrow()
df_pandas = dataset.table.to_pandas()

Catalog Management

from pydala import Catalog

# Create catalog from YAML configuration
catalog = Catalog("catalog.yaml")

# YAML configuration example:
# tables:
#   sales_2023:
#     path: "/data/sales/2023"
#     filesystem: "local"
#   customers:
#     path: "/data/customers"
#     filesystem: "local"

# Query across datasets using automatic table loading
result = catalog.query("""
    SELECT
        s.*,
        c.customer_name,
        c.segment
    FROM sales_2023 s
    JOIN customers c ON s.customer_id = c.id
    WHERE s.date >= '2023-01-01'
""")

# Or access datasets directly
sales_dataset = catalog.get_dataset("sales_2023")
filtered_sales = sales_dataset.filter("amount > 1000")
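The commented configuration above, written out as a standalone `catalog.yaml` (the keys simply mirror that example; see the PyDala2 documentation for the full schema):

```yaml
# catalog.yaml
tables:
  sales_2023:
    path: "/data/sales/2023"
    filesystem: "local"
  customers:
    path: "/data/customers"
    filesystem: "local"
```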

📚 Documentation

Comprehensive documentation is available at pydala2.readthedocs.io:

  • Getting Started
  • User Guide
  • API Reference
  • Advanced Topics

🤝 Contributing

Contributions are welcome! Please see our Contributing Guide for details.

📝 License

MIT License

Download files

Source Distribution

pydala2-0.22.4.tar.gz (412.3 kB)

Built Distribution

pydala2-0.22.4-py3-none-any.whl (69.4 kB)

File details

Details for the file pydala2-0.22.4.tar.gz.

File metadata

  • Download URL: pydala2-0.22.4.tar.gz
  • Size: 412.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.9

File hashes

Hashes for pydala2-0.22.4.tar.gz
Algorithm Hash digest
SHA256 8df8c8c89ea297a4345d3914bc7b8c0f82f73aa1d2c0bf979934cad0888ee745
MD5 b512ca895ac78e0c01e79ba9581a861f
BLAKE2b-256 98e2586f1059056232c6ba100789b87be833f69ea119e2c70e2733b4013435ef

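The digests above can be checked locally after downloading. A minimal sketch using only the standard library (the filename and expected digest in the commented usage are taken from the table above):

```python
# Verify a downloaded artifact against a published SHA256 digest,
# using only the Python standard library.
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives are not loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# expected = "8df8c8c89ea297a4345d3914bc7b8c0f82f73aa1d2c0bf979934cad0888ee745"
# assert sha256_of("pydala2-0.22.4.tar.gz") == expected
```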

File details

Details for the file pydala2-0.22.4-py3-none-any.whl.

File metadata

  • Download URL: pydala2-0.22.4-py3-none-any.whl
  • Size: 69.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.9

File hashes

Hashes for pydala2-0.22.4-py3-none-any.whl
Algorithm Hash digest
SHA256 17797281928a52f12f92d50ba6ce141b97aa39ef949af6341b64440fd9cafd33
MD5 89120cb949cf374d4eb5a4fba2aa3812
BLAKE2b-256 77873d99fae88c66e63a22eb800f51a8bd88b73daf5c64769b6037fc5519fa4c

