
Pandas DataFrame subclasses that enforce structure and can self-organize.

Project description

Typed DataFrames

Pandas DataFrame subclasses that self-organize and serialize robustly.

Film = TypedDfs.typed("Film").require("name", "studio", "year").build()
df = Film.read_csv("file.csv")
assert df.columns.tolist() == ["name", "studio", "year"]
type(df)  # Film

Your types remember how to be read, including columns, dtypes, indices, and custom requirements. No index_cols=, header=, set_index, or astype needed.
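
For comparison, here is a rough sketch of the plain-pandas boilerplate that a typed read replaces; the specific dtype and column checks are illustrative only:

import pandas as pd

plain = pd.read_csv("file.csv")
plain = plain.astype({"year": "int64"})                    # enforce a dtype by hand
assert set(plain.columns) >= {"name", "studio", "year"}    # check required columns by hand
# With typed-dfs, Film.read_csv("file.csv") performs these checks itself.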

Read and write any format:

path = input("input file? [.csv/.tsv/.tab/.json/.xml.bz2/.feather/.snappy.h5/...]")
df = Film.read_file(path)
df.write_file("output.snappy")

Need dataclasses?

instances = df.to_dataclass_instances()
Film.from_dataclass_instances(instances)
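
A minimal sketch of using those instances, assuming each generated dataclass exposes the required columns as attributes (field names here follow the Film example above):

instances = df.to_dataclass_instances()
for film in instances:
    print(film.name, film.year)   # assumed attribute access matching the required columns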

Save metadata?

df = df.set_attrs(dataset="piano")
df.write_file("df.csv", attrs=True)
df = Film.read_file("df.csv", attrs=True)
print(df.attrs)  # e.g. {"dataset": "piano"}

Make dirs? Don't overwrite?

df.write_file("df.csv", mkdirs=True, overwrite=False)

Write / verify checksums?

df.write_file("df.csv", file_hash=True)
df = Film.read_file("df.csv", file_hash=True)  # fails if wrong

Get example datasets?

print(ExampleDfs.penguins().df)
#    species     island  bill_length_mm  ...  flipper_length_mm  body_mass_g     sex
# 0    Adelie  Torgersen            39.1  ...              181.0       3750.0    MALE

Pretty-print the obvious way?

df.pretty_print(to="all_data.md.zip")
wiki_txt = df.pretty_print(fmt="mediawiki")

All standard DataFrame methods remain available. Use .of(df) to convert to your type, or .vanilla() for a plain DataFrame.
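
For example (a small sketch reusing the Film type defined above):

import pandas as pd

plain = pd.DataFrame([["Amadeus", "Orion", 1984]], columns=["name", "studio", "year"])
films = Film.of(plain)      # convert and validate against Film's requirements
back = films.vanilla()      # back to a plain pandas DataFrame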

Read the docs 📚 for more info and examples.

๐Ÿ› Pandas serialization bugs fixed

Pandas has several issues with serialization.

See: Fixed issues

Depending on the format and columns, these issues can occur:
  • columns silently added or dropped,
  • errors on either read or write of empty DataFrames,
  • inability to use DataFrames with indices in Feather,
  • writing to Parquet failing with half-precision floats,
  • partially written files lingering after an error,
  • the buggy xlrd being preferred by read_excel,
  • the buggy odfpy likewise being preferred for ODS,
  • a file read back differing from the DataFrame that was written (see the sketch after this list),
  • no way to write fixed-width format,
  • the platform text encoding being used rather than utf-8,
  • and invalid JSON being written via the built-in json library.
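
As a small illustration of the round-trip problem in plain pandas (typed-dfs is not involved here), a named index silently turns into an ordinary column:

import pandas as pd

df = pd.DataFrame({"x": [1, 2]}, index=pd.Index(["a", "b"], name="id"))
df.to_csv("roundtrip.csv")
back = pd.read_csv("roundtrip.csv")       # "id" is now a regular column
assert list(back.columns) == ["id", "x"]  # the original had only ["x"]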

🎁 Other features

See more in the guided walkthrough ✏️

See: Short feature list
  • Dtype-aware natural sorting (see the sketch after this list)
  • UTF-8 by default
  • Near-atomicity of read/write
  • Matrix-like typed dataframes and methods (e.g. matrix.is_symmetric())
  • DataFrame-compatible frozen, hashable, ordered collections (dict, list, and set)
  • Serialize JSON robustly, preserving NaN, inf, −inf, enums, timezones, complex numbers, etc.
  • Serialize more formats like TOML and INI
  • Interpreting paths and formats (e.g. FileFormat.split("dir/myfile.csv.gz").compression # gz)
  • Generate good CLI help text for input DataFrames
  • Parse/verify/add/update/delete files in a .shasum-like file
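
For instance, natural sorting orders "file2" before "file10"; this sketch assumes sort_natural takes the name of the column to sort by:

import pandas as pd

Names = TypedDfs.typed("Names").require("filename").build()
df = Names.of(pd.DataFrame({"filename": ["file10", "file2", "file1"]}))
df = df.sort_natural("filename")   # assumed signature: natural sort by this column
# expected order: file1, file2, file10 (a plain lexicographic sort would give file1, file10, file2)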

💔 Limitations

See: List of limitations
  • Multi-level columns are not yet supported.
  • Columns and index levels cannot share names.
  • Duplicate column names are not supported. (These are strange anyway.)
  • A typed DF cannot have columns "level_0", "index", or "Unnamed: 0".
  • inplace is forbidden in some functions; avoid it or use .vanilla().

🔌 Serialization support

TypedDfs provides the methods read_file and write_file, which guess the format from the filename extension. For example, this will convert a gzipped, tab-delimited file to Feather:

TastyDf = typeddfs.typed("TastyDf").build()
TastyDf.read_file("myfile.tab.gz").write_file("myfile.feather")

Pandas does most of the serialization, but some formats require extra packages. Typed-dfs declares extras so that you can install the required packages in compatible versions.

Here are the extras:

  • feather: Feather (uses: pyarrow)
  • parquet: Parquet (e.g. .snappy) (uses: pyarrow)
  • xml (uses: lxml)
  • excel: Excel and LibreOffice .xlsx/.ods/.xls, etc. (uses: openpyxl, defusedxml)
  • toml: TOML (uses: tomlkit)
  • html (uses: html5lib, beautifulsoup4)
  • xlsb: rare binary Excel file (uses: pyxlsb)
  • HDF5: no extra provided (uses: tables)

For example, for Feather and TOML support use: typeddfs[feather,toml]
As a shorthand for all formats, use typeddfs[all].
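
For instance, with pip (quoted so the brackets survive your shell):

pip install "typeddfs[feather,toml]"
pip install "typeddfs[all]"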

📊 Serialization in-depth

See: Full table
format       packages                  extra    sanity  speed  file sizes
Feather      pyarrow                   feather  +++     ++++   +++
Parquet      pyarrow or fastparquet †  parquet  ++      +++    ++++
csv/tsv      none                      none     ++      −−     −−
flexwf ‡     none                      none     ++      −−     −−
.fwf         none                      none     +       −−     −−
json         none                      none     −−      −−−    −−−
xml          lxml                      xml      −       −−−    −−−
.properties  none                      none     −−      −−     −−
toml         tomlkit                   toml     −−      −−     −−
INI          none                      none     −−−     −−     −−
.lines       none                      none     ++      −−     −−
.npy         none                      none     −       +      +++
.npz         none                      none     −       +      +++
.html        html5lib,beautifulsoup4   html     −−      −−−    −−−
pickle       none                      none     −−      −−−    −−−
XLSX         openpyxl,defusedxml       excel    +       −−     +
ODS          openpyxl,defusedxml       excel    +       −−     +
XLS          openpyxl,defusedxml       excel    −−      −−     +
XLSB         pyxlsb                    xlsb     −−      −−     ++
HDF5         tables                    hdf5     −−      −      ++

⚠ Note: The hdf5 extra is currently disabled.

See: serialization notes
  • † fastparquet can be used instead. It is slower but a much smaller dependency.
  • Parquet only supports str, float64, float32, int64, int32, and bool. Other numeric types are automatically converted during write.
  • ‡ .flexwf is fixed-width with optional delimiters.
  • JSON has inconsistent handling of None. (orjson is more consistent).
  • XML requires Pandas 1.3+.
  • Not all JSON, XML, TOML, and HDF5 files can be read.
  • .ini and .properties can only be written with exactly 2 columns + index levels: a key and a value. INI keys are in the form section.name.
  • .lines can only be written with exactly 1 column or index level.
  • .npy and .npz only serialize numpy objects. They are not supported in read_file and write_file.
  • .html is not supported in read_file and write_file.
  • Pickle is insecure and not recommended.
  • Pandas supports odfpy for ODS and xlrd for XLS. In fact, it prefers those. However, they are very buggy; openpyxl is much better.
  • XLSM, XLTX, XLTM, XLS, and XLSB files can contain macros, which Microsoft Excel will ingest.
  • XLS is a deprecated format.
  • XLSB is not fully supported in Pandas.
  • HDF may not work on all platforms yet due to a tables issue.

Feather offers massively better performance than CSV, gzipped CSV, and HDF5 in read speed, write speed, memory overhead, and compression ratio. Parquet typically produces smaller files than Feather at some cost in speed. Feather is the preferred format for most cases.

🔒 Security

Refer to the security policy.

📝 Extra notes

See: Pinned versions

Dependencies in the extras only have version minimums, not maximums. For example, typed-dfs requires pyarrow >= 4. natsort is likewise only assigned a minimum version, which means the behavior of sort_natural could change as natsort evolves. To prevent this, pin natsort to a specific major version; e.g. natsort = "^8" with Poetry or natsort>=8,<9 with pip.

🍁 Contributing

Typed-Dfs is licensed under the Apache License, version 2.0. New issues and pull requests are welcome. Please refer to the contributing guide. Generated with Tyrannosaurus.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

typeddfs-0.16.5.tar.gz (77.9 kB)

Uploaded Source

Built Distribution

typeddfs-0.16.5-py3-none-any.whl (92.4 kB)

Uploaded Python 3

File details

Details for the file typeddfs-0.16.5.tar.gz.

File metadata

  • Download URL: typeddfs-0.16.5.tar.gz
  • Upload date:
  • Size: 77.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.9.10 Linux/5.11.0-1028-azure

File hashes

Hashes for typeddfs-0.16.5.tar.gz
Algorithm Hash digest
SHA256 631296a252ddb614c997596d75b268768ab6c225becc434311f79e9027d73d15
MD5 f97a009aa64c13e52bc8e2307dbb1d64
BLAKE2b-256 df6923b4c90de17493d82dcd65f094b636dc389fc3cd99342a109678bb06e101

File details

Details for the file typeddfs-0.16.5-py3-none-any.whl.

File metadata

  • Download URL: typeddfs-0.16.5-py3-none-any.whl
  • Upload date:
  • Size: 92.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.9.10 Linux/5.11.0-1028-azure

File hashes

Hashes for typeddfs-0.16.5-py3-none-any.whl
Algorithm Hash digest
SHA256 6c2a0a98a2bdd6bb941219864abc22874e30d1aa720e4e86dc77061db863ee98
MD5 977e483f603f7921eb4ae6abd4fe7969
BLAKE2b-256 00d206bf2e9884a5758e0a999504a48462ff40a8777f3a602f83611e011fdb55

