
A simple Parquet converter for JSON/Python data

Project description

This library wraps pyarrow to provide tools for easily converting JSON data into Parquet format. It is written mostly in Python, it iterates over files, and it copies the data several times in memory, so it is not meant to be the fastest option available. It is, however, convenient for smaller data sets and for anyone who isn’t too concerned about speed.

Installation

pip install json2parquet

Usage

Here’s how to load a random JSON dataset.

from json2parquet import convert_json

# Infer Schema (requires reading dataset for column names)
convert_json(input_filename, output_filename)

# Given columns
convert_json(input_filename, output_filename, ["my_column", "my_int"])

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
convert_json(input_filename, output_filename, schema)

You can also work with Python data structures directly.

from json2parquet import load_json, ingest_data, write_parquet, write_parquet_dataset

# Loading JSON to a PyArrow RecordBatch (schema is optional as above)
load_json(input_filename, schema)

# Working with a list of dictionaries
ingest_data(input_data, schema)

# Writing Parquet Files from PyArrow Record Batches
write_parquet(data, destination)

# You can also pass any keyword arguments that PyArrow accepts
write_parquet(data, destination, compression='snappy')

# You can also write partitioned data
write_parquet_dataset(data, destination_dir, partition_cols=["foo", "bar", "baz"])
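
Putting these pieces together, here is a minimal end-to-end sketch; the records, schema and output filename are illustrative:

from json2parquet import ingest_data, write_parquet
import pyarrow as pa

# Illustrative input: a list of dictionaries and a matching schema
records = [
    {'my_column': 'hello', 'my_int': 1},
    {'my_column': 'world', 'my_int': 2},
]
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])

# Convert the list of dictionaries to a PyArrow RecordBatch, then write it out
record_batch = ingest_data(records, schema)
write_parquet(record_batch, 'example.parquet', compression='snappy')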

If you know your schema, you can specify a custom datetime format (only one is supported for now). This formatting is ignored if you don’t pass a PyArrow schema.

from json2parquet import convert_json

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
    pa.field('my_date', pa.timestamp('ns')),  # illustrative timestamp column the date_format applies to
])
date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
convert_json(input_filename, output_filename, schema, date_format=date_format)

Although json2parquet can infer schemas, it also provides helpers for pulling in external ones.

from json2parquet import load_json
from json2parquet.helpers import get_schema_from_redshift

# Fetch the schema from Redshift (requires psycopg2)
schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

# Load JSON with the Redshift schema
load_json(input_filename, schema)

Operational Notes

If you are using this library to convert JSON data that will be read by Spark, Athena, Spectrum or Presto, make sure you pass use_deprecated_int96_timestamps when writing your Parquet files; otherwise you will see some really screwy dates.
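
For example, a minimal sketch, assuming record_batch is a PyArrow RecordBatch produced by load_json or ingest_data as above (the output filename is illustrative):

from json2parquet import write_parquet

# The keyword is passed straight through to PyArrow so that Spark, Athena,
# Spectrum and Presto read the timestamps correctly
write_parquet(record_batch, 'events.parquet', use_deprecated_int96_timestamps=True)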

Contributing

Code Changes

  • Clone a fork of the library
  • Run make setup
  • Run make test
  • Apply your changes (don’t bump the version)
  • Add tests if needed
  • Run make test again to ensure nothing broke
  • Submit a PR

Documentation Changes

It is always a struggle to keep documentation correct and up to date, so any fixes are welcome. If you don’t want to clone the repo to work locally, feel free to edit directly on GitHub and submit pull requests through GitHub’s built-in features.

Changelog

0.0.23

  • Bump pyarrow, numpy and Pandas versions

0.0.22

  • Bump pyarrow and Pandas versions

0.0.21

  • Don’t lock ciso8601 version.

0.0.20

  • Add support for DATE fields. h/t to Spectrify for the implementation

0.0.19

  • Properly handle boolean columns with None.

0.0.18

  • Allow schema to be an optional argument to convert_json

0.0.17

  • Bring write_parquet_dataset to a top level import

0.0.16

  • Properly convert Boolean fields passed as numbers to PyArrow booleans.

0.0.15

  • Add support for custom datetime formatting (thanks @Madhu1512)
  • Add support for writing partitioned datasets (thanks @mthota15)

0.0.14

  • Stop silencing Redshift errors.

0.0.13

  • Fix decimal type for newer pyarrow versions

0.0.12

  • Allow casting of int64 -> int32

0.0.11

  • Bump PyArrow and allow int32 data

0.0.10

  • Allow passing partition columns when getting a Redshift schema, so they can be skipped

0.0.9

  • Fix conversion of timestamp columns again

0.0.8

  • Fix conversion of timestamp columns

0.0.7

  • Force converted Timestamps to max out at pandas.Timestamp.max if they exceed the resolution of datetime64[ns]

0.0.6

  • Add automatic downcasting for Python float to float32 via pandas when schema specifies pa.float32()

0.0.5

  • Fix conversion of float types to be size specific

0.0.4

  • Fix ingestion of timestamp data with ns resolution

0.0.3

  • Add pandas dependency
  • Add proper ingestion of timestamp data using Pandas to_datetime

0.0.2

  • Fix formatting of README so it displays on PyPI

0.0.1

  • Initial release
  • JSON/data writing support
  • Redshift Schema reading support

