CLI data plumbing tool

Project description

tableconv

tableconv converts tabular data from any format to any format.

Install

pip install tableconv

Usage

usage: tableconv SOURCE_URL [-q QUERY_SQL] [-o DEST_URL]

positional arguments:
  SOURCE_URL            Specify the data source URL.

optional arguments:
  -h, --help            show this help message and exit
  -q SOURCE_QUERY, --query SOURCE_QUERY
                        Query to run on the source. Even for non-SQL datasources (e.g. csv or
                        json), SQL querying is still supported, try `SELECT * FROM data`.
  -F INTERMEDIATE_FILTER_SQL, --filter INTERMEDIATE_FILTER_SQL
                        Filter (i.e. transform) the input data using a SQL query operating on the
                        dataset in memory using DuckDB SQL.
  -o DEST_URL, --dest DEST_URL, --out DEST_URL
                        Specify the data destination URL. If this destination already exists, be
                        aware that the default behavior is to overwrite.
  -i, --interactive     Enter interactive REPL query mode.
  --open                Open resulting file/url (not supported for all destination types)
  -s SCHEMA_COERCION, --schema SCHEMA_COERCION, --coerce-schema SCHEMA_COERCION
                        Coerce source schema (experimental feature)
  --restrict-schema     Exclude all columns not included in the schema definition (experimental
                        feature)
  -v, --verbose, --debug
                        Show debug details, including API calls and error sources.
  --version             show program's version number and exit
  --quiet               Only display errors.

supported url schemes:
  ascii:- (dest only)
  asciibox:- (dest only)
  asciifancygrid:- (dest only)
  asciigrid:- (dest only)
  asciilite:- (dest only)
  asciipipe:- (dest only)
  asciiplain:- (dest only)
  asciipresto:- (dest only)
  asciipretty:- (dest only)
  asciipsql:- (dest only)
  asciisimple:- (dest only)
  awsathena://eu-central-1 
  awsdynamodb://eu-central-1/example_table (source only)
  csa:- 
  example.csv 
  example.dta 
  example.feather 
  example.h5 
  example.hdf5 
  example.json 
  example.jsonl 
  example.ldjson 
  example.ndjson 
  example.orc (source only)
  example.parquet 
  example.py 
  example.python 
  example.tsv 
  example.xls 
  example.xlsx 
  example.yaml 
  example.yml 
  gsheets://:new: 
  html:- (dest only)
  jiracloud://mycorpname (source only)
  jiraformat:- (dest only)
  jsonarray:- 
  latex:- (dest only)
  list:- 
  markdown:- (dest only)
  md:- (dest only)
  mediawikiformat:- (dest only)
  moinmoinformat:- (dest only)
  mssql://127.0.0.1:5432/example_db 
  mysql://127.0.0.1:5432/example_db 
  nestedlist:- (source only)
  oracle://127.0.0.1:5432/example_db 
  postgis://127.0.0.1:5432/example_db 
  postgres://127.0.0.1:5432/example_db 
  postgresql://127.0.0.1:5432/example_db 
  pylist:- 
  pythonlist:- 
  rst:- (dest only)
  smartsheet://SHEET_ID (source only)
  sql_values:- (dest only)
  sqlite3:///tmp/example.db 
  sqlite:///tmp/example.db 
  sumologic://?from=2021-03-01T00:00:00Z&to=2021-05-03T00:00:00Z (source only)
  tex:- (dest only)
  yamlsequence:- 

Examples

Basic Conversion

Convert JSON to CSV

tableconv test.json -o test.csv

Convert CSV to JSON

tableconv test.csv -o test.json

Dump a Postgres table as JSON

tableconv postgresql://192.168.0.10:5432/test_db/my_table -o my_table.json

Display a parquet file's data in a human-readable format

tableconv test.parquet -o ascii:-

Convert CSV to a Markdown Table

tableconv test.csv -o md:-

Data Transformation

Dump the first 100 rows of a postgres table as JSON

tableconv postgresql://192.168.0.10:5432/test_db -q 'SELECT * FROM my_table ORDER BY id LIMIT 100' -o my_table.json

Copy a few columns from one CSV into a new CSV. (In general, all functionality works across all supported data formats: you can of course run SQL against an Oracle database, but SQL on JSON, SQL on Excel, and, as here, SQL on CSV are equally supported.)

tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o output.csv

Append a Few Columns From a CSV Into MySQL

tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o mysql://localhost:3306/test_db/my_table?if_exists=append

Extract a report from a SQLite database into a new Google Spreadsheet

tableconv sqlite3://my_db.db -q 'SELECT name, COUNT(*) from occurrences ORDER BY 2 DESC LIMIT 10' -o "gsheets://:new:/?name=top_occurrences_$(date +'%Y_%m_%d')"
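
Aggregate or transform data in memory with DuckDB SQL via the -F/--filter flag. (A sketch: this assumes the in-memory dataset is referenced as `data`, as it is for -q, and uses a hypothetical events.jsonl input file.)

tableconv events.jsonl -F 'SELECT name, COUNT(*) AS occurrences FROM data GROUP BY name ORDER BY 2 DESC' -o summary.csv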

Interactive Mode

Launch an interactive SQL shell to inspect data from a CSV file in the terminal

tableconv test.csv -i
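
Interactive mode is not limited to files; it should work against any supported source. For example, to explore a Postgres database (a sketch reusing the example connection URL from above):

tableconv postgresql://192.168.0.10:5432/test_db -i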

Pseudo-Tabular Data Operations

Arrays: Arrays can be thought of as one-dimensional tables, so tableconv has strong support for array formats too. Here is an example of converting a copy/pasted newline-delimited list into a list in Python list syntax.

pbpaste | tableconv list:- -o pylist:-

Or in YAML's sequence syntax:

pbpaste | tableconv list:- -o yamlsequence:-

Or as a full single-dimensional CSV table:

pbpaste | tableconv list:- -o csv:-
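
Array formats work as sources too. For instance, a JSON array piped in on stdin can be expanded back into a newline-delimited list (a sketch, assuming jsonarray:- accepts a JSON array on stdin, per the supported schemes list above):

echo '["alpha", "beta", "gamma"]' | tableconv jsonarray:- -o list:-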

Details

As a prototype, tableconv is usable as a quick-and-dirty CLI ETL tool for converting data between any of the supported formats, or for performing basic bulk data transformations and joins written in a single unified language (SQL) against disparate data in wildly different formats. That is the immediate value proposition of tableconv, but it was created within the mental framework of a larger vision: all software fundamentally interfaces via data tables, and every UI and API can be interpreted as a data frame or data table. Instead of requiring power users to learn interface after interface and build bespoke tooling to extract and manipulate data at scale in each one, the world needs a highly interoperable, operating-system-level client that lets power users directly interact with, join, and manipulate data using SQL (or similar), through the universal "table" abstraction, in a consistent UI across every service. tableconv is that tool. It is meant to have adapters written to support any and all services and data formats.

However, this is just a prototype. The software is slow in all respects and memory- and CPU-intensive. It has no streaming support and loads all data into memory before converting it. Its most efficient adapters cannot handle tables over 10 million cells, and the least efficient cannot handle over 100,000 cells. Schemas can migrate inconsistently depending upon the data available. It has experimental features that will not work reliably, such as schema management, the unorthodox URL scheme, and the special array (one-dimensional table) support. All parts of the user interface are expected to be overhauled at some point. The code quality is mediocre and inconsistent. Most obscure adapter options are untested. It has no story or documentation for service authentication, aside from SQL databases. Lastly, the documentation is weak overall: nothing yet documents the standard options available for every adapter, nor any adapter-specific options.

Python API

Quickstart Example: Basic API usage: Replicating a typical CLI command using the API

In [1]: import tableconv

In [2]: # tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o gsheets://:new:/?name=test

In [3]: tableconv.load_url('test.csv', query='SELECT time, name FROM data ORDER BY time DESC').dump_to_url('gsheets://:new:', params={'name': 'test'})

Quickstart Example: More advanced API usage: Importing data from an arbitrary URL into a Python dictionary

In [1]: import tableconv

In [2]: tableconv.load_url('postgresql://localhost:5432/test_db/cities').as_dict_records()
Out[2]:
[
 {'LatD': 41, 'LatM': 5, 'LatS': 59, 'NS': 'N', 'LonD': 80, 'LonM': 39, 'LonS': 0, 'EW': 'W', 'City': 'Youngstown', 'State': 'OH'},
 {'LatD': 42, 'LatM': 52, 'LatS': 48, 'NS': 'N', 'LonD': 97, 'LonM': 23, 'LonS': 23, 'EW': 'W', 'City': 'Yankton', 'State': 'SD'},
 [...]
]
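
A plain format conversion with no query follows the same pattern (a sketch, assuming the query and params arguments shown above are optional; test.json and test.csv are hypothetical files):

In [1]: import tableconv

In [2]: # equivalent CLI command: tableconv test.json -o test.csv

In [3]: tableconv.load_url('test.json').dump_to_url('test.csv')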

SDK API Reference Documentation

(Reference documentation pending)

Main Influences

  • odo
  • Singer
  • ODBC/JDBC
  • osquery

Download files

Download the file for your platform.

Source Distribution

tableconv-1.9731.20211224.tar.gz (36.3 kB)

Built Distribution

tableconv-1.9731.20211224-py3-none-any.whl (42.2 kB)

File details

Details for the file tableconv-1.9731.20211224.tar.gz.

File metadata

  • Download URL: tableconv-1.9731.20211224.tar.gz
  • Size: 36.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.7

File hashes

Hashes for tableconv-1.9731.20211224.tar.gz

  • SHA256: 045aa34039b615af67e467a28411a309b24f72aafea5fdbb1b003c59b21bbfa3
  • MD5: bb36ab73144668329b41a509dcea9daa
  • BLAKE2b-256: ab93a94405233b47f0df0b11f6273ed2f2fc8a953a27d198d2cb518d16d15bcb
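
To verify a downloaded sdist against the SHA256 digest above, standard tooling is enough (a sketch, not specific to tableconv; compare the output to the digest listed):

sha256sum tableconv-1.9731.20211224.tar.gz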

File details

Details for the file tableconv-1.9731.20211224-py3-none-any.whl.

File metadata

  • Download URL: tableconv-1.9731.20211224-py3-none-any.whl
  • Size: 42.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.7

File hashes

Hashes for tableconv-1.9731.20211224-py3-none-any.whl

  • SHA256: 70268a05833b82c72ffccbec453379e505d1bfefd9591b07181101649b7dca70
  • MD5: 6a58657a9354f296afb3231fe5a56ac1
  • BLAKE2b-256: 868325e1bb10f15f49106f3acdeac4af165e9af1b37bdb2bb88ba2d016c21764
