Simple Apache Drill alternative using PySpark

Project description

Setup

Run the terminal command pip install microdrill

Dependencies

MicroDrill was tested with PySpark on Spark 1.6.

Usage

Defining Query Parquet Table

ParquetTable(table_name, schema_index_file=file_name)

  • table_name: Name used to reference the table.

  • file_name: Name of the file to search for the table schema in.

Using Parquet DAL

ParquetDAL(file_uri, sc)

Connecting to tables

parquet_conn = ParquetDAL(file_uri, sc)
parquet_table = ParquetTable(table_name, schema_index_file=file_name)
parquet_conn.set_table(table_name, parquet_table)
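The snippet above assumes an already running SparkContext. A minimal setup sketch is shown below; the local master, file URI, table name, schema file, and the import paths (`microdrill.dal`, `microdrill.table`) are assumptions, not taken from this page, and running it requires pyspark and microdrill to be installed.

```python
def connect_example():
    # Hypothetical wiring of MicroDrill to a local Spark context.
    from pyspark import SparkContext
    from microdrill.dal import ParquetDAL        # import path assumed
    from microdrill.table import ParquetTable    # import path assumed

    sc = SparkContext("local[*]", "microdrill-example")
    # "hdfs://namenode/warehouse", "users" and "schema.parquet"
    # are placeholder examples.
    parquet_conn = ParquetDAL("hdfs://namenode/warehouse", sc)
    parquet_table = ParquetTable("users", schema_index_file="schema.parquet")
    parquet_conn.set_table("users", parquet_table)
    return parquet_conn
```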

Queries

Returning Table Object

parquet_conn(table_name)

Returning Field Object

parquet_conn(table_name)(field_name)

Basic Query

parquet_conn.select(field_object, [field_object2, ...]).where(field_object == value)
parquet_conn.select(field_object1, field_object2).where((field_object1 == value1) & ~(field_object2 == value2))
parquet_conn.select(field_object1, field_object2).where((field_object1 != value1) | field_object1.regexp(reg_exp))
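When combining conditions with `&`, `|` and `~`, each comparison should be parenthesised, because in Python these operators bind more tightly than `==` and `!=`. A plain-Python illustration of the pitfall:

```python
# & binds tighter than ==, so an unparenthesised expression is parsed wrongly:
wrong = True == 1 & 0           # parsed as True == (1 & 0) -> False
right = (True == 1) & (1 == 1)  # each comparison wrapped   -> True
```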

Grouping By

parquet_conn.groupby(field_object1, [field_object2, ...])

Ordering By

parquet_conn.orderby(field_object1, [field_object2, ...])
parquet_conn.orderby(~field_object)

Limiting

parquet_conn.limit(number)

Executing

df = parquet_conn.execute()

execute() returns a PySpark DataFrame.
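Putting the pieces together, a full query might look like the sketch below. The "users" table and its "name" and "age" fields are hypothetical, and the function expects a connection set up as in "Connecting to tables" above; this is an illustration of the call pattern, not a tested program.

```python
def query_example(parquet_conn):
    # Look up field objects from a hypothetical "users" table.
    name = parquet_conn("users")("name")
    age = parquet_conn("users")("age")

    # Build the query step by step, mirroring the sections above.
    parquet_conn.select(name, age).where((age != 0) & name.regexp("^A"))
    parquet_conn.orderby(~age)   # ~ reverses the sort order
    parquet_conn.limit(10)

    return parquet_conn.execute()  # a PySpark DataFrame
```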

Returning Field Names From Schema

parquet_conn(table_name).schema()

Developers

Install the latest JDK and run make setup in a terminal.
