Json2Parquet |Build Status|
===========================

This library wraps ``pyarrow`` to provide tools for easily converting
JSON data to Parquet format. It is written mostly in Python, iterates
over input files, and copies the data several times in memory, so it is
not meant to be the fastest option available. It is, however, convenient
for smaller datasets and for users who are not especially sensitive to
speed.

Installation
~~~~~~~~~~~~

.. code:: bash

    pip install json2parquet

Usage
~~~~~

Here's how to convert a JSON dataset:

.. code:: python

    from json2parquet import convert_json

    # Infer the schema (requires reading the dataset for column names)
    convert_json(input_filename, output_filename)

    # Given a list of column names
    convert_json(input_filename, output_filename, ["my_column", "my_int"])

    # Given a PyArrow schema
    import pyarrow as pa

    schema = pa.schema([
        pa.field('my_column', pa.string()),
        pa.field('my_int', pa.int64()),
    ])
    convert_json(input_filename, output_filename, schema)

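The converters above take a filename as input. As a minimal sketch, assuming the expected input is line-delimited JSON (one object per line, with field names becoming columns), here is how such a file can be prepared with only the standard library; the final ``convert_json`` call is left commented out since it requires ``json2parquet`` to be installed:

```python
import json
import os
import tempfile

# Sample records: each dict becomes one row, each key one column.
records = [
    {"my_column": "foo", "my_int": 1},
    {"my_column": "bar", "my_int": 2},
]

# Write line-delimited JSON: one serialized object per line.
input_filename = os.path.join(tempfile.mkdtemp(), "data.json")
with open(input_filename, "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# With the input in place, conversion is a single call:
# convert_json(input_filename, "data.parquet", ["my_column", "my_int"])

# Sanity-check the file we just wrote.
with open(input_filename) as f:
    lines = [json.loads(line) for line in f]
print(lines[0]["my_column"])  # foo
```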
You can also work with Python data structures directly:

.. code:: python

    from json2parquet import load_json, ingest_data, write_parquet

    # Load JSON to a PyArrow RecordBatch (the schema is optional, as above)
    load_json(input_filename, schema)

    # Work with a list of dictionaries
    ingest_data(input_data, schema)

    # Write Parquet files from PyArrow RecordBatches
    write_parquet(data, destination)

    # You can also pass any keyword arguments that PyArrow accepts
    write_parquet(data, destination, compression='snappy')

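Before handing a list of dictionaries to ``ingest_data``, it can help to make every record carry the same keys. The column-filling strategy below is an illustration using only the standard library, not part of the library's API; records missing a key get ``None``, which maps to a null value in the resulting table:

```python
# Raw records with inconsistent keys, as often comes out of event logs.
raw = [
    {"my_column": "foo", "my_int": 1},
    {"my_column": "bar"},               # missing "my_int"
]

# Collect the union of all keys, then fill missing ones with None.
columns = sorted({key for record in raw for key in record})
input_data = [{col: record.get(col) for col in columns} for record in raw]

print(input_data[1])  # {'my_column': 'bar', 'my_int': None}
# input_data is now ready for: ingest_data(input_data, schema)
```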
Although ``json2parquet`` can infer schemas, it also has helpers for pulling in external ones:

.. code:: python

    from json2parquet import load_json
    from json2parquet.helpers import get_schema_from_redshift

    # Fetch the schema from Redshift (requires psycopg2)
    schema = get_schema_from_redshift(redshift_schema, redshift_table, redshift_uri)

    # Load JSON with the Redshift schema
    load_json(input_filename, schema)

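As a hypothetical sketch of what ``redshift_uri`` might look like: Redshift speaks the PostgreSQL wire protocol, so a Postgres-style connection URI is assumed here (check your cluster's endpoint; 5439 is Redshift's default port, and all names below are placeholders):

```python
from urllib.parse import quote

# Placeholder credentials; URL-encode the password so special
# characters like "@" and "/" don't break the URI.
user = "analyst"
password = quote("p@ss/word", safe="")
host = "example-cluster.abc123.us-east-1.redshift.amazonaws.com"
database = "analytics"

redshift_uri = f"postgresql://{user}:{password}@{host}:5439/{database}"
print(redshift_uri)
# then, e.g.:
# schema = get_schema_from_redshift("public", "events", redshift_uri)
```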
Contributing
~~~~~~~~~~~~
Code Changes
------------

- Clone a fork of the library
- Run ``make setup``
- Run ``make test``
- Apply your changes (but don't bump the version)
- Add tests if needed
- Run ``make test`` again to ensure nothing broke
- Submit a PR

Documentation Changes
---------------------

It is always a struggle to keep documentation correct and up to date. Any fixes are welcome. If you don't want to clone the repo to work locally, feel free to edit on GitHub and submit pull requests via GitHub's built-in features.


.. |Build Status| image:: https://travis-ci.org/andrewgross/json2parquet.svg?branch=master
   :target: https://travis-ci.org/andrewgross/json2parquet

Changelog
~~~~~~~~~

0.0.1
-----

- Initial release
- JSON/data writing support
- Redshift Schema reading support