
pyreaddbc

pyreaddbc is a Python library for working with DBase database files. Legacy systems from the Brazilian Ministry of Health still use the DBF and DBC formats to publish data. This package was developed to help PySUS extract data from these formats into more modern ones. pyreaddbc can also be used to convert DBC files from any other source.

Installation

You can install pyreaddbc using pip:

pip install pyreaddbc

Usage

Reading DBC Files

Extracting the DBF from a DBC file may require specifying the encoding of the original data, if known.

import pyreaddbc

df = pyreaddbc.read_dbc("LTPI2201.dbc", encoding="iso-8859-1")
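
If you only need the decompressed DBF file rather than a DataFrame, pyreaddbc also provides a dbc2dbf helper. The call below is a minimal sketch, assuming the dbc2dbf(infile, outfile) signature of recent releases:

import pyreaddbc

# Decompress a DBC file into a plain DBF file on disk
# (assumes the dbc2dbf(infile, outfile) helper is available)
pyreaddbc.dbc2dbf("LTPI2201.dbc", "LTPI2201.dbf")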

Exporting to CSV.GZ

To export a DataFrame to a compressed CSV file (CSV.GZ), you can use pandas:

import pyreaddbc

df = pyreaddbc.read_dbc("./LTPI2201.dbc", encoding="iso-8859-1")
df.to_csv("LTPI2201.csv.gz", compression="gzip", index=False)
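
The compressed file can later be loaded back with pandas, which infers the gzip compression from the .gz suffix:

import pandas as pd

# pandas detects the gzip compression from the ".gz" extension
df = pd.read_csv("LTPI2201.csv.gz")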

Exporting to Parquet

To export a DataFrame to a Parquet file, you can use the pyarrow library:

import pyreaddbc
import pyarrow as pa
import pyarrow.parquet as pq
import pandas as pd
from pathlib import Path

# Read DBC file and convert to DataFrame
df = pyreaddbc.read_dbc("./LTPI2201.dbc", encoding="iso-8859-1")

# Export to CSV.GZ
df.to_csv("LTPI2201.csv.gz", compression="gzip", index=False)

# Export to Parquet (create the output directory first)
parquet_dir = Path("parquets")
parquet_dir.mkdir(exist_ok=True)
pq.write_table(pa.Table.from_pandas(df), parquet_dir / "LTPI2201.parquet")

# Read the Parquet files back into DataFrames
chunks_list = [
    pd.read_parquet(f, engine="pyarrow") for f in parquet_dir.glob("*.parquet")
]

# Concatenate DataFrames
df_parquet = pd.concat(chunks_list, ignore_index=True)
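
As a simpler alternative, pandas can also write a single Parquet file directly with DataFrame.to_parquet; this sketch assumes pyarrow (or fastparquet) is installed as the Parquet engine:

# One-step alternative using pandas' built-in Parquet support
df.to_parquet("LTPI2201.parquet", index=False)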

License

GNU Affero General Public License (AGPL-3.0)

This license ensures that the software remains open-source and free to use, modify, and distribute while requiring that any changes or enhancements made to the codebase are also made available to the community under the same terms.

Acknowledgements

This program decompresses .dbc files to .dbf. The code is based on the work of Mark Adler (madler@alumni.caltech.edu, zlib/blast) and Pablo Fonseca (https://github.com/eaglebh/blast-dbf).

PySUS has further extended and adapted this code to create pyreaddbc. The original work of Mark Adler and Pablo Fonseca is gratefully acknowledged for its contribution to this project.

Note: pyreaddbc is maintained with funding from AlertaDengue.

