Iterable data processing Python library

Iterable Data

Work in progress. Documentation is in progress.

Iterable Data is a Python library for reading and writing data files row by row. Its iterable classes work like file objects or csv.DictReader, and the same row-by-row interface extends to formats such as Parquet.

This library was written to simplify data processing and conversion between formats.

Supported file types:

  • BSON
  • JSON
  • NDJSON (JSON lines)
  • XML
  • XLS
  • XLSX
  • Parquet
  • ORC
  • Avro
  • Pickle

Supported file compression: GZip, BZip2, LZMA (.xz), LZ4, ZIP, Brotli, ZStandard
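
As the examples below show, open_iterable detects both the file type and the compression codec from the file extension, so a compressed file of any supported format can be opened in a single call. A minimal sketch (the data.jsonl.zst filename is illustrative):

from iterable.helpers.detect import open_iterable

# The .jsonl.zst extension selects the NDJSON reader
# and the ZStandard codec
source = open_iterable('data.jsonl.zst')
for row in source:
    print(row)  # each row is a Python dictionary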

Why write this library?

Python has many high-quality data processing tools and libraries, especially pandas and other dataframe libraries. The only issue with most of them is that they work on flat data: data frames don't support complex data types, so you must flatten your data each time.

Iterable Data helps you read any data as Python dictionaries instead of flattening it. This makes it much easier to work with data sources such as JSON, NDJSON, or BSON files.
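
For example, a nested NDJSON record comes back as an ordinary nested dictionary rather than a set of flattened columns. A short sketch (the people.jsonl file and its fields are hypothetical):

from iterable.helpers.detect import open_iterable

# Each line of the hypothetical people.jsonl file might look like:
# {"name": "Alice", "contacts": {"email": "alice@example.com"}}
source = open_iterable('people.jsonl')
for row in source:
    # Nested fields stay nested; no flattening step is needed
    print(row['name'], row['contacts']['email'])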

This code is used in several tools written by its author: the command-line tool undatum and the data processing ETL engine datacrafter.

Requirements

Python 3.8+

Installation

pip install iterabledata

or install from this repository.

Documentation

In progress. Please see the usage examples below.

Usage and examples

Read compressed CSV file

Reads a compressed .csv.xz file row by row.

from iterable.helpers.detect import open_iterable

source = open_iterable('data.csv.xz')
n = 0
for row in source:
    n += 1
    # Add data processing code here
    if n % 1000 == 0: print('Processing %d' % (n))

Detect encoding and file delimiter

Detects the encoding and delimiter of the selected CSV file and uses them to open it as an iterable.

from iterable.helpers.detect import open_iterable
from iterable.helpers.utils import detect_encoding, detect_delimiter

delimiter = detect_delimiter('data.csv')
encoding = detect_encoding('data.csv')

source = open_iterable('data.csv', iterableargs={'encoding': encoding['encoding'], 'delimiter': delimiter})
n = 0
for row in source:
    n += 1
    # Add data processing code here
    if n % 1000 == 0: print('Processing %d' % (n))

Convert Parquet file to JSON lines (NDJSON) compressed with LZMA using pipeline

Uses the pipeline function to iterate through a Parquet file and write its selected fields to a JSON lines (NDJSON) file compressed with LZMA.

from iterable.helpers.detect import open_iterable
from iterable.pipeline import pipeline

source = open_iterable('data/data.parquet')
destination = open_iterable('data/data.jsonl.xz', mode='w')

def extract_fields(record, state):
    # Keep only the selected fields from each record
    out = {}
    record = dict(record)
    for k in ['name',]:
        out[k] = record[k]
    return out

def print_process(stats, state):
    # Progress callback: print pipeline statistics
    print(stats)

pipeline(source, destination=destination, process_func=extract_fields, trigger_on=2, trigger_func=print_process, final_func=print_process, start_state={})
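
Judging by the arguments, process_func is applied to every record before it is written to the destination, trigger_func appears to be called as a progress callback every trigger_on records, final_func runs once at the end, and start_state is passed through to the callbacks as shared state.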

Convert gzipped JSON lines (NDJSON) file to BSON compressed with LZMA

Reads each row from a JSON lines file using the Gzip codec and writes BSON data using the LZMA codec.

from iterable.datatypes import JSONLinesIterable, BSONIterable
from iterable.codecs import GZIPCodec, LZMACodec

# Read gzipped NDJSON
codecobj = GZIPCodec('data.jsonl.gz', mode='r', open_it=True)
iterable = JSONLinesIterable(codec=codecobj)

# Write LZMA-compressed BSON
codecobj = LZMACodec('data.bson.xz', mode='wb', open_it=False)
write_iterable = BSONIterable(codec=codecobj, mode='w')

n = 0
for row in iterable:
    n += 1
    if n % 10000 == 0: print('Processing %d' % (n))
    write_iterable.write(row)

More examples and tests

See the tests directory for more usage examples.
