data-toolset

data-toolset is designed to simplify your data processing tasks by providing a more user-friendly alternative to the traditional JAR utilities like avro-tools and parquet-tools. With this Python package, you can effortlessly handle various data file formats, including Avro and Parquet, using a simple and intuitive command-line interface.

Installation

Python 3.8, 3.9, and 3.10 are supported and tested (to some extent).
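A quick way to confirm your interpreter is in the supported range before installing (standard library only, nothing package-specific):

```python
import sys

# data-toolset is tested on Python 3.8-3.10; fail fast on older interpreters.
if sys.version_info < (3, 8):
    raise RuntimeError("data-toolset requires Python 3.8 or newer")
print("Python %d.%d detected" % sys.version_info[:2])
```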

python -m pip install --user data-toolset

Usage

$ data-toolset -h
usage: data-toolset [-h] {head,tail,meta,schema,stats,query,validate,merge,count,to_json,to_csv,to_avro,to_parquet,random_sample} ...

positional arguments:
  {head,tail,meta,schema,stats,query,validate,merge,count,to_json,to_csv,to_avro,to_parquet,random_sample}
                        commands
    head                Print the first N records from a file
    tail                Print the last N records from a file
    meta                Print a file's metadata
    schema              Print the Avro schema for a file
    stats               Print statistics about a file
    query               Query a file
    validate            Validate a file
    merge               Merge multiple files into one
    count               Count the number of records in a file
    to_json             Convert a file to JSON format
    to_csv              Convert a file to CSV format
    to_avro             Convert a file to Avro format
    to_parquet          Convert a file to Parquet format
    random_sample       Randomly sample records from a file

Examples

Print the first 10 records of a Parquet file:

$ data-toolset head my_data.parquet -n 10
shape: (1, 7)
┌───────────┬─────┬──────────┬────────┬──────────────────────────┬────────────────────────────┬──────────────────┐
│ character ┆ age ┆ is_human ┆ height ┆ quote                    ┆ friends                    ┆ appearance       │
│ ---       ┆ --- ┆ ---      ┆ ---    ┆ ---                      ┆ ---                        ┆ ---              │
│ str       ┆ i64 ┆ bool     ┆ f64    ┆ str                      ┆ list[str]                  ┆ struct[2]        │
╞═══════════╪═════╪══════════╪════════╪══════════════════════════╪════════════════════════════╪══════════════════╡
│ Alice     ┆ 10  ┆ true     ┆ 150.5  ┆ Curiouser and curiouser! ┆ ["Rabbit", "Cheshire Cat"] ┆ {"blue","small"} │
└───────────┴─────┴──────────┴────────┴──────────────────────────┴────────────────────────────┴──────────────────┘

Query a Parquet file using a SQL-like expression:

$ data-toolset query my_data.parquet "SELECT * FROM 'my_data.parquet' WHERE height > 165"
shape: (2, 7)
┌─────────────────┬─────┬──────────┬────────┬───────────────────────┬────────────────────────────────────┬───────────────────┐
│ character       ┆ age ┆ is_human ┆ height ┆ quote                 ┆ friends                            ┆ appearance        │
│ ---             ┆ --- ┆ ---      ┆ ---    ┆ ---                   ┆ ---                                ┆ ---               │
│ str             ┆ i64 ┆ bool     ┆ f64    ┆ str                   ┆ list[str]                          ┆ struct[2]         │
╞═════════════════╪═════╪══════════╪════════╪═══════════════════════╪════════════════════════════════════╪═══════════════════╡
│ Mad Hatter      ┆ 35  ┆ true     ┆ 175.2  ┆ I'm late!             ┆ ["Alice"]                          ┆ {"green","tall"}  │
│ Queen of Hearts ┆ 50  ┆ false    ┆ 165.8  ┆ Off with their heads! ┆ ["White Rabbit", "King of Hearts"] ┆ {"red","average"} │
└─────────────────┴─────┴──────────┴────────┴───────────────────────┴────────────────────────────────────┴───────────────────┘

Merge multiple Avro files into one:

$ data-toolset merge file1.avro file2.avro file3.avro merged_file.avro

Convert an Avro file to Parquet:

$ data-toolset to_parquet my_data.avro output.parquet

Convert a Parquet file to JSON:

$ data-toolset to_json my_data.parquet output.json

Contributing

Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue on GitHub.

TODO

  • optimizations [TBD]
  • benchmarking
