
A simple Parquet converter for JSON/Python data

Project description

This library wraps pyarrow to provide tools for easily converting JSON data into Parquet format. It is written mostly in Python, it iterates over files, and it copies the data several times in memory, so it is not meant to be the fastest thing available. It is convenient, however, for smaller data sets and for people who don’t have a huge issue with speed.

Installation

With pip:

pip install json2parquet

With conda:

conda install -c conda-forge json2parquet

Usage

Here’s how to convert a JSON dataset.

from json2parquet import convert_json

# Infer Schema (requires reading dataset for column names)
convert_json(input_filename, output_filename)

# Given columns
convert_json(input_filename, output_filename, ["my_column", "my_int"])

# Given columns and custom field names
field_aliases = {'my_column': 'my_updated_column_name', "my_int": "my_integer"}
convert_json(input_filename, output_filename, ["my_column", "my_int"], field_aliases=field_aliases)


# Given PyArrow schema
import pyarrow as pa
# Note: type factories like pa.string() must be called
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
])
convert_json(input_filename, output_filename, schema)
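
For a quick end-to-end check, here is a minimal sketch (assuming newline-delimited JSON input, one object per line, with hypothetical file names) that writes a small dataset, converts it, and reads it back with pyarrow:

import json

import pyarrow.parquet as pq

from json2parquet import convert_json

# Hypothetical sample data: one JSON object per line.
records = [
    {"my_column": "hello", "my_int": 1},
    {"my_column": "world", "my_int": 2},
]
with open("sample.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Let json2parquet infer the columns, then verify the output.
convert_json("sample.json", "sample.parquet")
print(pq.read_table("sample.parquet"))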

You can also work with Python data structures directly.

from json2parquet import load_json, ingest_data, write_parquet, write_parquet_dataset

# Loading JSON to a PyArrow RecordBatch (schema is optional as above)
load_json(input_filename, schema)

# Working with a list of dictionaries
ingest_data(input_data, schema)

# Working with a list of dictionaries and custom field names
field_aliases = {'my_column': 'my_updated_column_name', "my_int": "my_integer"}
ingest_data(input_data, schema, field_aliases)

# Writing Parquet Files from PyArrow Record Batches
write_parquet(data, destination)

# You can also pass any keyword arguments that PyArrow accepts
write_parquet(data, destination, compression='snappy')

# You can also write partitioned data
write_parquet_dataset(data, destination_dir, partition_cols=["foo", "bar", "baz"])
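
Putting those pieces together, a minimal sketch (with hypothetical data and file names) that ingests a list of dictionaries and writes it out might look like this:

import pyarrow as pa

from json2parquet import ingest_data, write_parquet

# Hypothetical input: a list of plain Python dictionaries.
input_data = [
    {"my_column": "a", "my_int": 1},
    {"my_column": "b", "my_int": 2},
]

# Type factories such as pa.string() are called, not referenced.
schema = pa.schema([
    pa.field("my_column", pa.string()),
    pa.field("my_int", pa.int64()),
])

# ingest_data returns a PyArrow RecordBatch, which write_parquet
# serializes; extra keyword arguments are passed through to PyArrow.
batch = ingest_data(input_data, schema)
write_parquet(batch, "example.parquet", compression="snappy")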

If you know your schema, you can specify a custom datetime format (only one per dataset for now). This format is ignored if you don’t pass a PyArrow schema.

from json2parquet import convert_json

# Given PyArrow schema
import pyarrow as pa
schema = pa.schema([
    pa.field('my_column', pa.string()),
    pa.field('my_int', pa.int64()),
    # Timestamp field added for illustration; date_format below controls
    # how string values are parsed into timestamp columns like this one.
    pa.field('my_date', pa.timestamp('ns')),
])
date_format = "%Y-%m-%dT%H:%M:%S.%fZ"
convert_json(input_filename, output_filename, schema, date_format=date_format)
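
With that schema, an input record like the following (hypothetical data; the my_date field matches the timestamp column above) would have its string value parsed using date_format:

{"my_column": "a", "my_int": 1, "my_date": "2018-01-01T12:00:00.000000Z"}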

Although json2parquet can infer schemas, it also has helpers to pull in external ones.

from json2parquet import load_json
from json2parquet.helpers import get_schema_from_redshift

# Fetch the schema from Redshift (requires psycopg2)
schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

# Load JSON with the Redshift schema
load_json(input_filename, schema)

Operational Notes

If you are using this library to convert JSON data to be read by Spark, Athena, Spectrum, or Presto, make sure you pass use_deprecated_int96_timestamps=True when writing your Parquet files, otherwise you will see some really screwy dates.
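
Since write_parquet forwards keyword arguments to PyArrow, this is just one more flag on the call shown earlier (a sketch; data and destination are placeholders):

from json2parquet import write_parquet

# Store timestamps as INT96 so Spark, Athena, Spectrum, and Presto
# read them correctly.
write_parquet(data, destination, use_deprecated_int96_timestamps=True)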

Contributing

Code Changes

  • Clone a fork of the library

  • Run make setup

  • Run make test

  • Apply your changes (don’t bump version)

  • Add tests if needed

  • Run make test to ensure nothing broke

  • Submit PR

Documentation Changes

It is always a struggle to keep documentation correct and up to date, so any fixes are welcome. If you don’t want to clone the repo to work locally, please feel free to edit using GitHub and to submit Pull Requests via GitHub’s built-in features.
