
poor man's data lake

Project description

PyDala2



Overview 📖

PyDala2 is a high-performance Python library for managing Parquet datasets with powerful metadata capabilities. Built on Apache Arrow, it provides an efficient, user-friendly interface for handling large-scale data operations.

✨ Key Features

  • 📦 Smart Dataset Management: Efficient Parquet handling with metadata optimization
  • 🔄 Robust Caching: Built-in support for faster data access
  • 🔌 Seamless Integration: Works with Polars, PyArrow, and DuckDB
  • 🔍 Advanced Querying: SQL-like filtering with predicate pushdown
  • 🛠️ Schema Management: Automatic validation and tracking

🚀 Quick Start

Installation

pip install pydala2

📊 Creating a Dataset

from pydala.dataset import ParquetDataset

dataset = ParquetDataset(
    path="path/to/dataset",
    partitioning="hive",         # Hive-style partitioning
    timestamp_column="timestamp", # For time-based operations
    cached=True                  # Enable performance caching
)

💾 Writing Data

import polars as pl
from datetime import date

# Create sample time-series data (1,000 daily values)
df = pl.DataFrame({
    "timestamp": pl.date_range(
        date(2023, 1, 1), date(2025, 9, 26), "1d", eager=True
    ),
    "value": range(1000),
})

# Write with smart partitioning and compression
dataset.write_to_dataset(
    data=df,                        # Polars/pandas DataFrame, Arrow Table,
                                    # Dataset, RecordBatch, or DuckDB result
    mode="overwrite",               # Options: "overwrite", "append", "delta"
    row_group_size=250_000,         # Optimize chunk size
    compression="zstd",             # High-performance compression
    partition_by=["year", "month"], # Partition by time components
    unique=True                     # Ensure data uniqueness
)

📥 Reading & Converting Data

dataset.load(update_metadata=True)

# Flexible data format conversion
pt = dataset.t                  # PyDala Table
df_polars = pt.to_polars()      # Convert to Polars
df_pandas = pt.to_pandas()      # Convert to Pandas
df_arrow = pt.to_arrow()        # Convert to Arrow
rel_ddb = pt.to_ddb()           # Convert to a DuckDB relation

# and many more... 

🔍 Smart Querying

# Efficient filtered reads with predicate pushdown
pt_filtered = dataset.filter("timestamp > '2023-01-01'")

# Chaining operations
df_filtered = (
    dataset
    .filter("column_name > 100")
    .pl.with_columns(
        pl.col("column_name").str.slice(0, 5).alias("new_column_name")
    )
    .to_pandas()
)

# Fast metadata-only scans
pt_scanned = dataset.scan("column_name > 100")

# Access matching files
matching_files = dataset.scan_files

🔄 Metadata Management

# Incremental metadata update
dataset.load(update_metadata=True)   # Update for new files

# Full metadata refresh
dataset.load(reload_metadata=True)   # Reload all metadata

# Repair schema/metadata
dataset.repair_schema()

⚡ Performance Optimization Tools

# Optimize storage types
dataset.opt_dtypes()              # Automatic type optimization

# Smart file management
dataset.compact_by_rows(max_rows=100_000)  # Combine small files
dataset.repartition(partitioning_columns=["date"])  # Optimize partitions
dataset.compact_by_timeperiod(interval="1d")  # Time-based optimization
dataset.compact_partitions()  # Partition structure optimization

⚠️ Important Notes

  • Type optimization involves a full dataset rewrite
  • Choose a compaction strategy based on your access patterns
  • Regular metadata updates ensure optimal query performance

📚 Documentation

There is a comprehensive tutorial available to help you get started with PyDala2, covering all features and functionalities in detail.

Note: This is generated with Code2Tutorial.

🤝 Contributing

Contributions welcome! See our contribution guidelines.

📝 License

MIT License

Project details



Download files

Download the file for your platform.

Source Distribution

pydala2-0.9.9.tar.gz (245.6 kB)

Uploaded Source

Built Distribution


pydala2-0.9.9-py3-none-any.whl (57.7 kB)

Uploaded Python 3

File details

Details for the file pydala2-0.9.9.tar.gz.

File metadata

  • Download URL: pydala2-0.9.9.tar.gz
  • Upload date:
  • Size: 245.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.9

File hashes

Hashes for pydala2-0.9.9.tar.gz

  • SHA256: 7e195c91b8254774a70fbea961e984600ce932330eb38802f8b9c7d9255dbad1
  • MD5: aa5bd4151cbd5d4ec2f8946b72b5ff34
  • BLAKE2b-256: cb71bda5a7e2075fb5be45a00f754ed3107e878fa65a17b5e8a25a30ccf83c9e


File details

Details for the file pydala2-0.9.9-py3-none-any.whl.

File metadata

  • Download URL: pydala2-0.9.9-py3-none-any.whl
  • Upload date:
  • Size: 57.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.9

File hashes

Hashes for pydala2-0.9.9-py3-none-any.whl

  • SHA256: 3ef2a4b4ce8a2c36ce6145861733cc5346fd16bbc99e3b4e9a2f8bb1e6953a0e
  • MD5: 6836420097dfe846ff167a254dd029cd
  • BLAKE2b-256: 00d83a2223e3bbe13fd58e933aa7a228638e589b5d3da2bc869c7f8a0e39a83b

