
Package for working with extended CSV (XCSV) files

Project description

xcsv

xcsv is a package for reading and writing extended CSV files.

Extended CSV format

  • Extended header section of parseable attributes, introduced by '#'.
  • Header row of variable name and units for each column.
  • Data rows.

Example

Extended header section

  • No leading/trailing whitespace.
  • Each line introduced by a comment ('#') character.
  • Each line contains a single header item, or a single element of a multi-line list header item.
  • Key/value separator ': '.
  • Multi-line values continue naturally over the lines that follow the line introducing the key.
  • Continuation lines whose value contains the delimiter character must be escaped with a leading delimiter (see the third summary paragraph in the example below).
  • Preferably use a common vocabulary for attribute names, such as the CF conventions.
  • Preferably include recommended attributes from Attribute Convention for Data Discovery (ACDD).
  • Preferably use units from Unified Code for Units of Measure and/or Udunits.
  • Units in parentheses.
  • Units are automatically parsed when they appear in parentheses at the end of a line. Hence, if a line ends with non-units text in parentheses (e.g. when expanding an acronym), make sure the line does not end with a closing parenthesis, otherwise that text will be incorrectly parsed as units; a trailing '.' suffices. (A sketch of this rule follows the example header below.)
    • This line: # latitude: -73.86 (degree_north) would parse correctly as a value/units dict: 'latitude': {'value': '-73.86', 'units': 'degree_north'}.
    • This line: # institution: BAS (British Antarctic Survey). would correctly avoid being parsed as a value/units dict because of the trailing '.'.
  • Certain special keys are used to further process the data, for example the missing_value key.
# id: 1
# title: The title
# summary: This dataset...
# The second summary paragraph.
# : The third summary paragraph.  Escaped because it contains the delimiter in a URL https://dummy.domain
# authors: A B, C D
# institution: BAS (British Antarctic Survey).
# latitude: -73.86 (degree_north)
# longitude: -65.46 (degree_east)
# elevation: 1897 (m a.s.l.)
# [a]: 2012 not a complete year
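
Purely as an illustration of the value/units rule above (this is not the package's actual parser), the split can be sketched with a small regular expression:

import re

# Illustrative sketch only: a header value that ends in '(...)' is split into
# a value and units; any other trailing character (such as '.') prevents the split.
UNITS_PATTERN = re.compile(r'^(?P<value>.+?)\s+\((?P<units>[^)]+)\)$')

def parse_header_value(value):
    match = UNITS_PATTERN.match(value)
    if match:
        return {'value': match['value'], 'units': match['units']}
    return value

print(parse_header_value('-73.86 (degree_north)'))
# {'value': '-73.86', 'units': 'degree_north'}
print(parse_header_value('BAS (British Antarctic Survey).'))
# BAS (British Antarctic Survey).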

Header row

  • No leading/trailing whitespace.
  • Preferably use a common vocabulary for variable name, such as CF conventions.
  • Units in parentheses.
  • Optional notes in square brackets that reference an item in the extended header section (see the sketch after the example row below).
time (year) [a],depth (m)
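
Again purely as an illustration (not the package's implementation), a column header such as time (year) [a] can be split into a name, units and an optional note:

import re

# Illustrative sketch only: split 'name (units) [note]' into its parts;
# the units and the note are both optional.
COLUMN_PATTERN = re.compile(
    r'^(?P<name>[^([]+?)(?:\s+\((?P<units>[^)]+)\))?(?:\s+\[(?P<notes>[^\]]+)\])?$'
)

def parse_column_header(header):
    match = COLUMN_PATTERN.match(header)
    return {key: match[key] for key in ('name', 'units', 'notes')}

print(parse_column_header('time (year) [a]'))
# {'name': 'time', 'units': 'year', 'notes': 'a'}
print(parse_column_header('depth (m)'))
# {'name': 'depth', 'units': 'm', 'notes': None}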

Data row

  • No leading/trailing whitespace.
2012,0.575

Automated post-processing of the data

If certain special keys are present in the extended header section, they will be used to automatically post-process the data. To turn off this automatic behaviour, either remove or rename these keys, or set parse_metadata=False when reading in the data.

  • missing_value: This defines the values in the data that are to be treated as missing. It is typically a value outside the domain of the data, such as -999.99, but can also be a symbolic value such as NA. All such values appearing in the data will be masked, appearing as an NA value to pandas (i.e. pd.isna(value) returns True). Note that pandas itself will automatically do this for certain values regardless of this key, such as the strings NaN or NA, or the constant None.
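
The effect of the missing_value key can be made concrete with plain pandas: exact matches of the declared missing value become NA, and everything else is left untouched. (This is an equivalent illustration, not the package's internal code.)

import pandas as pd

# Mask exact matches of the declared missing value, mirroring what the
# missing_value header item does to the data on read.
missing_value = -999.99
df = pd.DataFrame({'depth (m)': [0.575, -999.99, 999.99]})
df = df.mask(df == missing_value)
print(pd.isna(df['depth (m)']))
# 0    False
# 1     True
# 2    False
# Name: depth (m), dtype: bool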

A note on encodings

The default character set encoding is UTF-8, without a Byte Order Mark (BOM). If an extended CSV file has a different encoding, it can either be converted to UTF-8 (by using iconv, for example), or the encoding can be specified when opening the file (xcsv.File(filename, encoding=encoding)).

If the encoding of a file is UTF-8 and it begins with a BOM, then the BOM is silently skipped. This is necessary so that the extended header section is parsed correctly.
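
For example, to open a file that is encoded as Latin-1 rather than UTF-8 (the filename and encoding here are just placeholders):

import xcsv

# Pass the file's encoding explicitly if it isn't UTF-8.
with xcsv.File('example_latin1.csv', encoding='latin-1') as f:
    content = f.read()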

Install

The package can be installed from PyPI:

$ pip install xcsv

Using the package

The package has a general XCSV class with a metadata attribute, which holds the parsed contents of the extended header section and the parsed column headers from the data table, and a data attribute, which holds the data table itself (including the column headers as-is).

The metadata attribute is a dict, with the following general structure:

{'header': {}, 'column_headers': {}}

and the data attribute is a pandas.DataFrame, so it has all the features of the pandas package.

The package also has a Reader class for reading an extended CSV file into an XCSV object, and similarly a Writer class for writing an XCSV object to a file in the extended CSV format. In addition there is a File class that provides a convenient context manager for reading and writing these files.
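
In practice, this means that once a file has been read, header items, parsed column headers and the data can all be accessed directly. Here example.csv is the file shown above:

import xcsv

with xcsv.File('example.csv') as f:
    content = f.read()

# Items from the extended header section and the parsed column headers.
print(content.metadata['header']['title'])                        # The title
print(content.metadata['column_headers']['depth (m)']['units'])   # m

# The data table is a pandas DataFrame.
print(content.data['depth (m)'].mean())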

Examples

Simple read and print

Read in a file and print the contents to stdout. This shows how the contents of the extended CSV file are stored in the XCSV object. Note how multi-line values, such as summary here, are stored in a list. Given the following script called, say, simple_read.py:

import argparse

import xcsv

parser = argparse.ArgumentParser()
parser.add_argument('filename', help='filename.csv')
args = parser.parse_args()

with xcsv.File(args.filename) as f:
    content = f.read()
    print(content.metadata)
    print(content.data)

Running it would produce:

$ python3 simple_read.py example.csv
{'header': {'id': '1', 'title': 'The title', 'summary': ['This dataset...', 'The second summary paragraph.', 'The third summary paragraph.  Escaped because it contains the delimiter in a URL https://dummy.domain'], 'authors': 'A B, C D', 'institution': 'BAS (British Antarctic Survey).', 'latitude': {'value': '-73.86', 'units': 'degree_north'}, 'longitude': {'value': '-65.46', 'units': 'degree_east'}, 'elevation': {'value': '1897', 'units': 'm a.s.l.'}, '[a]': '2012 not a complete year'}, 'column_headers': {'time (year) [a]': {'name': 'time', 'units': 'year', 'notes': 'a'}, 'depth (m)': {'name': 'depth', 'units': 'm', 'notes': None}}}
   time (year) [a]  depth (m)
0             2012      0.575
1             2011      1.125
2             2010      2.225

Simple read and print with missing values

If the above example header section included the following:

# missing_value: -999.99

and the data section looked like:

time (year) [a],depth (m)
2012,0.575
2011,1.125
2010,2.225
2009,-999
2008,999
2007,-999.99
2006,999.99
2005,NA
2004,NaN

Running it would produce:

$ python3 simple_read.py missing_example.csv
{'header': {'id': '1', 'title': 'The title', 'summary': ['This dataset...', 'The second summary paragraph.', 'The third summary paragraph.  Escaped because it contains the delimiter in a URL https://dummy.domain'], 'authors': 'A B, C D', 'institution': 'BAS (British Antarctic Survey).', 'latitude': {'value': '-73.86', 'units': 'degree_north'}, 'longitude': {'value': '-65.46', 'units': 'degree_east'}, 'elevation': {'value': '1897', 'units': 'm a.s.l.'}, 'missing_value': '-999.99', '[a]': '2012 not a complete year'}, 'column_headers': {'time (year) [a]': {'name': 'time', 'units': 'year', 'notes': 'a'}, 'depth (m)': {'name': 'depth', 'units': 'm', 'notes': None}}}
   time (year) [a]  depth (m)
0             2012      0.575
1             2011      1.125
2             2010      2.225
3             2009   -999.000
4             2008    999.000
5             2007        NaN
6             2006    999.990
7             2005        NaN
8             2004        NaN

Note that the -999.99 value has been automatically masked as a missing value (shown as NaN in the printed pandas DataFrame). The NA and NaN strings in the original data are also masked, but by pandas itself, irrespective of the missing_value header item.
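
Because the masked values are ordinary pandas NA values, the usual pandas tools apply to them, for example:

import xcsv

with xcsv.File('missing_example.csv') as f:
    content = f.read()

# Count the masked values and drop the rows that contain them.
print(content.data['depth (m)'].isna().sum())   # 3 in the example above
print(content.data.dropna())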

Simple read and plot

Read a file and plot the data:

import argparse

import matplotlib.pyplot as plt

import xcsv

parser = argparse.ArgumentParser()
parser.add_argument('filename', help='filename.csv')
args = parser.parse_args()

with xcsv.File(args.filename) as f:
    content = f.read()
    content.data.plot(x='depth (m)', y='time (year) [a]')
    plt.show()

Simple read and write

Read a file in, manipulate the data in some way, and write this modified XCSV object out to a new file:

import argparse

import xcsv

parser = argparse.ArgumentParser()
parser.add_argument('in_filename', help='in_filename.csv')
parser.add_argument('out_filename', help='out_filename.csv')
args = parser.parse_args()

with xcsv.File(args.in_filename) as f:
    content = f.read()

# Manipulate the data...

with xcsv.File(args.out_filename, mode='w') as f:
    f.write(xcsv=content)
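
As a purely hypothetical example of the manipulation step above, the following could replace the # Manipulate the data... comment; it adds a note to the header (assuming the Writer serialises whatever is held in metadata['header']) and rounds the depth column:

# Hypothetical manipulation: record a processing note in the header and
# round the depth column to two decimal places.
content.metadata['header']['comment'] = 'Depths rounded to 2 d.p.'
content.data['depth (m)'] = content.data['depth (m)'].round(2)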
