

# Intake-parquet


Intake data loader interface to the Parquet binary tabular data format.

Parquet is very popular in the big-data ecosystem because it provides columnar and chunk-wise access to the data, with efficient encodings and compression. This makes the format particularly effective for streaming through large subsections of even larger datasets, hence its common use with Hadoop and Spark.

Parquet data may be single files, directories of files, or nested directories, where the directory names are meaningful in the partitioning of the data.

### Features

The parquet plugin allows for:

  • efficient metadata parsing, so you know the data types and number of records without loading any data
  • random access of partitions
  • column and index selection, load only the data you need
  • passing of value-based filters, so that you load only those partitions that contain matching data (NB: this does not filter the values within a partition)

### Installation

The conda install instructions are:

`conda install -c conda-forge intake-parquet`

### Examples

See the notebook in the examples/ directory.
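Beyond the notebook, a typical way to use the plugin is through an intake catalog. The following is a sketch of a catalog entry; the source name, path, and column names are hypothetical, and `driver: parquet` is assumed to be the driver name this plugin registers:

```yaml
sources:
  mytable:                      # hypothetical source name
    description: Example Parquet data source
    driver: parquet             # provided by intake-parquet
    args:
      urlpath: '{{ CATALOG_DIR }}/data/mytable.parquet'  # hypothetical path
      columns: [x, y]           # load only these columns
```

With such a catalog saved as `catalog.yml`, something like `intake.open_catalog('catalog.yml').mytable.read()` should return the selected columns as a dataframe.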


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Files for intake-parquet, version 0.2.3:

| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| intake-parquet-0.2.3.tar.gz | 119.7 kB | Source | None |
