CLI data plumbing tool

Project description

tableconv

tableconv is a prototype tool for converting tabular data from any format to any format.

Install

pipx install tableconv

(or: pip install tableconv)
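
After installing, you can confirm the install and print the full option reference (reproduced under Usage below):

tableconv --version
tableconv --help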

Examples

Basic Conversion

Convert JSON to CSV

tableconv test.json -o test.csv
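
For illustration, if test.json held a small array of flat records like the hypothetical data below, the command above would write an equivalent test.csv (exact column ordering may vary):

[
  {"id": 1, "name": "alice"},
  {"id": 2, "name": "bob"}
]

id,name
1,alice
2,bob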

Convert CSV to JSON

tableconv test.csv -o test.json

Dump a Postgres table as JSON

tableconv postgresql://192.168.0.10:5432/test_db/my_table -o my_table.json

Display a parquet file's data in a human-readable format

tableconv test.parquet -o ascii:-

Convert CSV to a Markdown Table

tableconv test.csv -o md:-

Data Transformation

Dump the first 100 rows of a postgres table as JSON

tableconv postgresql://192.168.0.10:5432/test_db -q 'SELECT * FROM my_table ORDER BY id LIMIT 100' -o my_table.json

Copy a few columns from one CSV into a new CSV. (In general, all functionality works on all supported data formats: you can of course query an Oracle database with SQL, but you can also run SQL on JSON, SQL on Excel, and, as here, SQL on CSV.)

tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o output.csv
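
As a hypothetical sketch, if test.csv contained the rows below, the query would keep only the time and name columns and reverse the row order:

time,name,value
2021-01-01,alpha,10
2021-01-02,beta,20

time,name
2021-01-02,beta
2021-01-01,alpha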

Append a few columns from a CSV into MySQL

tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o mysql://localhost:3306/test_db/my_table?if_exists=append

Extract a report from a SQLite database into a new Google Spreadsheet

tableconv sqlite3://my_db.db -q 'SELECT name, COUNT(*) from occurrences ORDER BY 2 DESC LIMIT 10' -o "gsheets://:new:/?name=top_occurrences_$(date +'%Y_%m_%d')"

Interactive Mode

Launch an interactive SQL shell to inspect data from a CSV file in the terminal

tableconv test.csv -i
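
The same flag should work against any supported source URL, not just local files; for example, to interactively explore a Postgres database (same hypothetical connection URL as above):

tableconv postgresql://192.168.0.10:5432/test_db -i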

Pseudo-Tabular Data Operations

Arrays: Arrays can be thought of as one-dimensional tables, so tableconv has strong support for array formats too. Here is an example of converting a copy/pasted newline-delimited list into Python list syntax.

pbpaste | tableconv list:- -o pylist:-

Or in YAML's sequence syntax:

pbpaste | tableconv list:- -o yamlsequence:-

Or as a full single-dimensional CSV table:

pbpaste | tableconv list:- -o csv:-
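
For example, assuming the clipboard contained the three lines below, the commands above would produce output approximately like this (hypothetical data; exact formatting may differ):

apple
banana
cherry

['apple', 'banana', 'cherry']

- apple
- banana
- cherry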

Details

As a prototype, tableconv is usable as a quick-and-dirty CLI ETL tool for converting data between any of the supported formats, or for performing basic bulk data transformations and joins defined in a unified language (SQL) while operating across disparate data in wildly different formats. That is the immediate value proposition of tableconv, but it was created within the mental framework of a larger vision: the tableconv vision of computing is that all software fundamentally interfaces via data tables, and that all UIs and APIs can be interpreted as data frames or data tables. Instead of requiring power users to learn interface after interface and build their own bespoke tooling to extract and manipulate data at scale in each one, the world needs a highly interoperable, operating-system-level client that lets power users directly interact with, join, and manipulate data with SQL (or similar), using the universal "table" abstraction, in a consistent UI across every service. tableconv is that tool; it is meant to have adapters written to support any and all services and data formats.

However, this is just a prototype. The software is slow in all respects and memory- and CPU-intensive. It has no streaming support and loads all data into memory before converting it. Its most efficient adapters cannot handle tables over 10 million cells, and the least efficient cannot handle over 100,000 cells. Schemas can migrate inconsistently depending upon the data available. It has experimental features that will not work reliably, such as schema management, the unorthodox URL scheme, and the special array (one-dimensional table) support. All parts of the user interface are expected to be overhauled at some point. The code quality is mediocre, inconsistent, and bug-prone. Most obscure adapter options are untested. It has no story or documentation for service authentication, aside from SQL databases. Lastly, the documentation is so weak that nothing documents the standard options available to every adapter, nor any adapter-specific options.

Usage

usage: tableconv SOURCE_URL [-q QUERY_SQL] [-o DEST_URL]

positional arguments:
  SOURCE_URL            Specify the data source URL.

options:
  -h, --help            show this help message and exit
  -q SOURCE_QUERY, -Q SOURCE_QUERY, --query SOURCE_QUERY
                        Query to run on the source. Even for non-SQL datasources (e.g. csv or
                        json), SQL querying is still supported, try `SELECT * FROM data`.
  -F INTERMEDIATE_FILTER_SQL, --filter INTERMEDIATE_FILTER_SQL
                        Filter (i.e. transform) the input data using a SQL query operating on the
                        dataset in memory using DuckDB SQL.
  -o DEST_URL, --dest DEST_URL, --out DEST_URL
                        Specify the data destination URL. If this destination already exists, be
                        aware that the default behavior is to overwrite.
  -i, --interactive     Enter interactive REPL query mode.
  --open                Open resulting file/url in the operating system desktop environment. (not
                        supported for all destination types)
  --schema SCHEMA_COERCION, --coerce-schema SCHEMA_COERCION
                        Coerce source schema according to a schema definition. (WARNING:
                        experimental feature)
  --restrict-schema     Exclude all columns not included in the SCHEMA_COERCION definition.
                        (WARNING: experimental feature)
  -v, --verbose, --debug
                        Show debug details, including API calls and error sources.
  --version             Show version number and exit
  --quiet               Only display errors.
  --print, --print-dest
                        Print resulting URL/path to stdout, for chaining with other commands.
  --daemon              Tableconv startup time (python startup time) is slow. To mitigate that,
                        you can first run tableconv as a daemon, and then all future invocations
                        (while daemon is still alive) will be fast. (WARNING: experimental
                        feature)

supported url schemes:
  ascii:- (dest only)
  asciibox:- (dest only)
  asciifancygrid:- (dest only)
  asciigrid:- (dest only)
  asciilite:- (dest only)
  asciipipe:- (dest only)
  asciiplain:- (dest only)
  asciipresto:- (dest only)
  asciipretty:- (dest only)
  asciipsql:- (dest only)
  asciisimple:- (dest only)
  awsathena://eu-central-1 
  awsdynamodb://eu-central-1/example_table (source only)
  awslogs://eu-central-1//aws/lambda/example-function (source only)
  csa:- 
  example.csv 
  example.dta 
  example.feather 
  example.fixedwidth 
  example.fwf 
  example.h5 
  example.hdf5 
  example.html 
  example.json 
  example.jsonl 
  example.jsonlines 
  example.ldjson 
  example.ndjson 
  example.numbers (source only)
  example.orc (source only)
  example.parquet 
  example.py 
  example.python 
  example.tsv 
  example.xls 
  example.xlsx 
  example.yaml 
  example.yml 
  gsheets://:new: 
  jiracloud://mycorpname (source only)
  jiraformat:- (dest only)
  jsonarray:- 
  latex:- (dest only)
  list:- 
  markdown:- (dest only)
  md:- (dest only)
  mediawikiformat:- (dest only)
  moinmoinformat:- (dest only)
  mssql://127.0.0.1:5432/example_db 
  mysql://127.0.0.1:5432/example_db 
  nestedlist:- (source only)
  oracle://127.0.0.1:5432/example_db 
  postgis://127.0.0.1:5432/example_db 
  postgres://127.0.0.1:5432/example_db 
  postgresql://127.0.0.1:5432/example_db 
  pylist:- 
  pythonlist:- 
  rst:- (dest only)
  smartsheet://SHEET_ID (source only)
  sql_values:- (dest only)
  sqlite3:///tmp/example.db 
  sqlite:///tmp/example.db 
  sumologic://?from=2021-03-01T00:00:00Z&to=2021-05-03T00:00:00Z (source only)
  tex:- (dest only)
  yamlsequence:- 

help & support:
  https://github.com/personalcomputer/tableconv/issues
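
As a sketch of how these flags compose (hypothetical file names; assuming the -F filter also exposes the in-memory dataset as a table named data, as -q does):

tableconv input.csv -F 'SELECT * FROM data WHERE name IS NOT NULL' -o report.xlsx --open

wc -l "$(tableconv input.csv -o /tmp/out.csv --print)"

The first command filters a CSV with DuckDB SQL, writes the result to an Excel file, and opens it in the desktop environment; the second uses --print, which emits the resulting path on stdout, to chain tableconv with another command.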

Python API

Quickstart Example: Basic API usage, replicating a typical CLI command using the API

In [1]: import tableconv

In [2]: # tableconv test.csv -q 'SELECT time, name FROM data ORDER BY time DESC' -o gsheets://:new:/?name=test

In [3]: tableconv.load_url('test.csv', query='SELECT time, name FROM data ORDER BY time DESC').dump_to_url('gsheets://:new:', params={'name': 'test'})

Quickstart Example: More advanced API usage, importing data from an arbitrary URL into a Python dictionary

In [1]: import tableconv

In [2]: tableconv.load_url('postgresql://localhost:5432/test_db/cities').as_dict_records()
Out[2]:
[
 {'LatD': 41, 'LatM': 5, 'LatS': 59, 'NS': 'N', 'LonD': 80, 'LonM': 39, 'LonS': 0, 'EW': 'W', 'City': 'Youngstown', 'State': 'OH'},
 {'LatD': 42, 'LatM': 52, 'LatS': 48, 'NS': 'N', 'LonD': 97, 'LonM': 23, 'LonS': 23, 'EW': 'W', 'City': 'Yankton', 'State': 'SD'},
 [...]
]
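
Combining only the methods shown above, a conversion can also be done entirely from Python; this is a sketch with hypothetical file names:

import tableconv

# Convert a CSV file to Parquet without going through the CLI
tableconv.load_url('test.csv').dump_to_url('test.parquet')

# Load a filtered subset into memory as a list of dicts
rows = tableconv.load_url('test.csv', query='SELECT time, name FROM data LIMIT 10').as_dict_records()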

SDK API Reference Documentation

(Reference documentation pending)

Main Influences

  • odo
  • Singer
  • ODBC/JDBC
  • osquery

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tableconv-1.9947.20231219.tar.gz (47.0 kB)

Uploaded Source

Built Distribution

tableconv-1.9947.20231219-py3-none-any.whl (56.1 kB)

Uploaded Python 3

File details

Details for the file tableconv-1.9947.20231219.tar.gz.

File metadata

  • Download URL: tableconv-1.9947.20231219.tar.gz
  • Upload date:
  • Size: 47.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for tableconv-1.9947.20231219.tar.gz:

  • SHA256: 3a4c3c9def653f5fcb3cfc5ece02b97e938d4709c41b8984b6b0dc998682bb30
  • MD5: cba44683718c02cf7fb98d38aef08c65
  • BLAKE2b-256: 199e2636027e998b133393832700daabad415883f71711701339f44edfd08937


File details

Details for the file tableconv-1.9947.20231219-py3-none-any.whl.

File metadata

File hashes

Hashes for tableconv-1.9947.20231219-py3-none-any.whl:

  • SHA256: 5b7a798e053ed48cff0e38ea8969c8338d995ab4cdac17ab2f3330d5d25eb1ea
  • MD5: adaf14449569d5d8958dd2f2362fc02e
  • BLAKE2b-256: 3af4fe9ca93a7cea2ec2f1b41f9dae9f09354f9ced4781ed6df43707c2eec257

