ParquetDB

Documentation | PyPI | GitHub

ParquetDB is a Python library designed to bridge the gap between traditional file storage and fully fledged databases, all while wrapping the powerful PyArrow library to streamline data input and output. By leveraging the Parquet file format, ParquetDB provides the portability and simplicity of file-based data storage alongside advanced querying features typically found in database systems.

Documentation

Check out the docs

Features

  • Simple Interface: Easy-to-use methods for creating, reading, updating, and deleting data.
  • Minimal Overhead: Achieve quick read/write speeds without the complexity of setting up or managing a larger database system.
  • Batching: Efficiently handle large datasets by batching operations.
  • Supports Complex Data Types: Handles nested and complex data types.
  • Schema Evolution: Supports adding new fields and updating schemas seamlessly.
  • Supports Storing Python Objects: ParquetDB can store Python objects (class instances and functions) by pickling them; see the sketch after this list.
  • Supports np.ndarrays: ParquetDB can store NumPy arrays (np.ndarray).
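
As a rough illustration of the last two features, the sketch below stores a record containing a NumPy array and a module-level function. This is a minimal sketch, assuming create() accepts ndarray and picklable-object values directly inside a record; the field names 'weights' and 'transform' are illustrative, not part of the API.

import numpy as np

from parquetdb import ParquetDB

def scale(x):
    return 2 * x

db = ParquetDB(db_path='parquetdb')

# 'weights' holds an ndarray; 'transform' holds a function that
# ParquetDB would pickle, under the assumption stated above.
db.create([{'name': 'model-1',
            'weights': np.array([0.1, 0.2, 0.3]),
            'transform': scale}])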

Installation

Install ParquetDB using pip:

pip install parquetdb

Usage

Creating a Database

Initialize a ParquetDB instance by specifying the path to the dataset directory:

from parquetdb import ParquetDB

db = ParquetDB(db_path='parquetdb')

Adding Data

Add data to the database using the create method. Data can be a dictionary, a list of dictionaries, or a Pandas DataFrame.

data = [
    {'name': 'Charlie', 'age': 28, 'occupation': 'Designer'},
    {'name': 'Diana', 'age': 32, 'occupation': 'Product Manager'}
]

db.create(data)
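
The same records can equivalently be inserted from a pandas DataFrame, since create accepts one directly. A minimal sketch:

import pandas as pd

df = pd.DataFrame([
    {'name': 'Charlie', 'age': 28, 'occupation': 'Designer'},
    {'name': 'Diana', 'age': 32, 'occupation': 'Product Manager'}
])

db.create(df)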

Normalizing

Normalization is crucial for maintaining performance and managing data efficiently. In a file-based database like ParquetDB, normalization balances the distribution of data across multiple files. Without it, files can end up with uneven row counts, which creates performance bottlenecks during queries, inserts, updates, and deletes.

The normalize method returns nothing; it restructures the dataset directory in place so that subsequent operations run against a more consistent, efficient layout.

from parquetdb import NormalizeConfig

db.normalize(
    normalize_config=NormalizeConfig(
        load_format='batches',      # Use the batch generator to normalize
        batch_readahead=10,         # Number of batches to load in memory ahead of time
        fragment_readahead=2,       # Number of files to load in memory ahead of time
        batch_size=100000,          # Batch size to use when normalizing; affects RAM consumption
        max_rows_per_file=500000,   # Max number of rows per Parquet file
        max_rows_per_group=500000   # Max number of rows per row group within each file
    )
)
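
If the defaults already suit your data, the config can presumably be omitted entirely (assuming NormalizeConfig provides defaults for every field):

db.normalize()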

Reading Data

Read data from the database using the read method. You can filter data by IDs, specify columns, and apply filters.

# Read all data
all_employees = db.read()

# Read specific columns
names = db.read(columns=['name'])

# Read data with filters
from pyarrow import compute as pc

age_filter = pc.field('age') > 30
older_employees = db.read(filters=[age_filter])
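
Filters are PyArrow compute expressions, so they compose with the usual boolean operators. The sketch below combines two conditions and also streams results in batches; the load_format='batches' option is an assumption here, mirroring the NormalizeConfig option shown above.

# Combine expressions with & (and), | (or), and ~ (not)
designers_over_25 = db.read(
    filters=[(pc.field('age') > 25) & (pc.field('occupation') == 'Designer')]
)

# Assumed: read(load_format='batches') yields record batches instead of one table
for batch in db.read(load_format='batches'):
    print(batch.num_rows)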

Updating Data

Update existing records in the database using the update method. Each record must include the id field.

update_data = [
    {'id': 1, 'occupation': 'Senior Engineer'},
    {'id': 3, 'age': 29}
]

db.update(update_data)
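
Because ParquetDB supports schema evolution (see Features), an update can also introduce a field that no record had before. A hedged sketch; the 'department' field is purely illustrative:

db.update([
    {'id': 1, 'department': 'R&D'}  # 'department' is a new, illustrative field
])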

Deleting Data

Delete records from the database by specifying their IDs.

db.delete(ids=[2, 4])
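
Reading back by ID should then come up empty. This assumes the ids argument suggested by the read description above, and that read returns a PyArrow table with a num_rows attribute:

remaining = db.read(ids=[2, 4])
print(remaining.num_rows)  # expected: 0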

Citing ParquetDB

If you use ParquetDB in your work, please cite the following paper:

    @misc{lang2025parquetdblightweightpythonparquetbased,
      title={ParquetDB: A Lightweight Python Parquet-Based Database}, 
      author={Logan Lang and Eduardo Hernandez and Kamal Choudhary and Aldo H. Romero},
      year={2025},
      eprint={2502.05311},
      archivePrefix={arXiv},
      primaryClass={cs.DB},
      url={https://arxiv.org/abs/2502.05311}}

Contributing

Contributions are welcome! Please open an issue or submit a pull request on GitHub. More information can be found in the CONTRIBUTING.md file.

License

This project is licensed under the MIT License. See the LICENSE file for details.

