
MEHC Curation

A comprehensive Python toolkit for molecular data curation, including validation, cleaning, normalization, and refinement pipelines.

Features

  • Validation: Validate SMILES strings and remove unwanted molecular types (mixtures, inorganics, organometallics)
  • Cleaning: Remove salts and neutralize charged molecules
  • Normalization: Normalize tautomers and stereoisomers
  • Refinement: Complete pipeline orchestrating all stages
  • Parallel Processing: Efficient parallel processing using all available CPUs by default
  • Comprehensive Reporting: Generate detailed reports for each processing stage

Installation

Prerequisites

mehc-curation depends on RDKit, which is best installed via conda:

conda install -c conda-forge rdkit

Install from PyPI

pip install mehc-curation

Install from source

git clone https://github.com/biochem-data-sci/mehc-curation.git
cd mehc-curation
pip install -e .

Quick Start

Python API

import pandas as pd
from mehc_curation.validation import ValidationStage
from mehc_curation.cleaning import CleaningStage
from mehc_curation.normalization import NormalizationStage
from mehc_curation.refinement import RefinementStage

# Load your SMILES data
df = pd.read_csv("your_data.csv")

# Validation
validator = ValidationStage(df)
validated_df = validator.complete_validation()

# Cleaning
cleaner = CleaningStage(validated_df)
cleaned_df = cleaner.complete_cleaning()

# Normalization
normalizer = NormalizationStage(cleaned_df)
normalized_df = normalizer.complete_normalization()

# Complete refinement pipeline
refiner = RefinementStage(df)
refined_df = refiner.complete_refinement(
    output_dir="./output",
    get_report=True
)

Command Line Interface

# Validation
python -m mehc_curation.validation -i input.csv -o output/ -c 5

# Cleaning
python -m mehc_curation.cleaning -i input.csv -o output/ -c 3

# Normalization
python -m mehc_curation.normalization -i input.csv -o output/ -c 3

# Complete refinement
python -m mehc_curation.refinement -i input.csv -o output/ --get_report

Modules

Validation Module

Validates SMILES strings and removes unwanted molecular types:

  • validate_smi(): Validate SMILES strings
  • rm_mixture(): Remove mixture compounds
  • rm_inorganic(): Remove inorganic compounds
  • rm_organometallic(): Remove organometallic compounds
  • complete_validation(): Run all validation steps
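The mixture and inorganic checks can be illustrated with plain string logic. This is a simplified sketch only: the library parses molecules with RDKit, and the helper names below (`is_mixture`, `looks_organic`) are illustrative, not the library's API.

```python
# Simplified, illustrative stand-ins for the validation checks.
# The real library uses RDKit parsing; these string heuristics
# only convey the idea behind each filter.

def is_mixture(smiles: str) -> bool:
    """A SMILES with a '.' separator encodes multiple fragments."""
    return "." in smiles

def looks_organic(smiles: str) -> bool:
    """Crude check: organic molecules contain at least one carbon.

    Uppercases the string and strips 'Cl'/'Ca' so chlorine and
    calcium are not mistaken for carbon.
    """
    text = smiles.upper().replace("CL", "").replace("CA", "")
    return "C" in text

print(is_mixture("CCO.[Na+]"))        # True: two fragments
print(looks_organic("[Na+].[Cl-]"))   # False: no carbon
```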

Cleaning Module

Cleans SMILES strings:

  • cl_salt(): Remove salts from SMILES
  • neutralize(): Neutralize charged molecules
  • complete_cleaning(): Run all cleaning steps
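The idea behind salt removal can be sketched in a few lines: keep only the largest fragment of a multi-fragment SMILES. This is a crude stand-in, not the library's implementation (real salt stripping typically uses RDKit's SaltRemover), and `strip_salt` is a hypothetical name.

```python
def strip_salt(smiles: str) -> str:
    """Keep only the largest fragment of a multi-fragment SMILES.

    Illustrative heuristic: fragment size is approximated by
    string length rather than atom count.
    """
    fragments = smiles.split(".")
    return max(fragments, key=len)

print(strip_salt("CC(=O)O.[Na+]"))  # CC(=O)O
```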

Normalization Module

Normalizes SMILES strings:

  • detautomerize(): Canonicalize tautomers to a single form
  • destereoisomerize(): Remove stereochemical information, so stereoisomers collapse to one representation
  • complete_normalization(): Run all normalization steps
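The effect of removing stereochemistry can be illustrated with a text substitution that strips the stereo markers (`@`, `/`, `\`) from a SMILES string. Illustrative only: the library resolves stereochemistry through RDKit rather than by regex, and `flatten_stereo` is a hypothetical name.

```python
import re

def flatten_stereo(smiles: str) -> str:
    """Strip stereochemical markers (@, /, \\) from a SMILES string."""
    return re.sub(r"[@/\\]", "", smiles)

print(flatten_stereo("C[C@@H](N)C(=O)O"))  # C[CH](N)C(=O)O
```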

Refinement Module

Complete refinement pipeline:

  • complete_refinement(): Orchestrates validation, cleaning, and normalization stages
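Conceptually, complete_refinement() chains the three stages, feeding each stage's output DataFrame into the next. A minimal orchestration sketch, where the stage functions are placeholders standing in for the library's stages rather than its actual API:

```python
import pandas as pd

# Placeholder stages: toy versions of validation, cleaning,
# and normalization operating on a 'smiles' column.
def validate(df):
    return df[df["smiles"].str.len() > 0]

def clean(df):
    return df.assign(smiles=df["smiles"].str.split(".").str[0])

def normalize(df):
    return df.assign(smiles=df["smiles"].str.replace("@", "", regex=False))

def refine(df, stages=(validate, clean, normalize)):
    """Run each stage in order, passing the result forward."""
    for stage in stages:
        df = stage(df)
    return df

df = pd.DataFrame({"smiles": ["C[C@H](O)C.[Na+]", ""]})
print(refine(df)["smiles"].tolist())  # ['C[CH](O)C']
```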

Configuration

CPU Usage

By default, the library uses all available CPUs (n_cpu=-1). You can specify the number of CPUs:

# Use all CPUs (default)
refiner.complete_refinement(n_cpu=-1)

# Use specific number of CPUs
refiner.complete_refinement(n_cpu=4)

# Use single CPU
refiner.complete_refinement(n_cpu=1)

Output Directories

  • output_dir is optional for every stage. If you omit it, data stays in memory and any generated reports are written to the current working directory.
  • When you do provide an output_dir, the folder will be created automatically if it does not exist, and both CSV outputs and reports are saved beneath it.
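The auto-creation behavior corresponds to standard pathlib usage (a generic sketch, not the library's internal code; `ensure_output_dir` is a hypothetical helper):

```python
from pathlib import Path

def ensure_output_dir(output_dir: str) -> Path:
    """Create the output directory (and any parents) if missing."""
    path = Path(output_dir)
    path.mkdir(parents=True, exist_ok=True)
    return path

out = ensure_output_dir("./output/reports")
print(out.is_dir())  # True
```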

Duplicate Handling

  • param_deduplicate defaults to True for all validation, cleaning, and normalization entry points, so duplicate rows are removed automatically unless you opt out by passing param_deduplicate=False.
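The deduplication itself is ordinary pandas behavior; the default corresponds to a call like this (a sketch of the effect, not the library's code):

```python
import pandas as pd

df = pd.DataFrame({"smiles": ["CCO", "CCO", "c1ccccc1"]})

# Drop exact duplicate SMILES, keeping the first occurrence.
deduped = df.drop_duplicates(subset="smiles").reset_index(drop=True)
print(deduped["smiles"].tolist())  # ['CCO', 'c1ccccc1']
```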

Requirements

  • Python >= 3.7
  • pandas >= 1.3.0
  • parallel-pandas >= 0.2.8
  • RDKit (install via conda: conda install -c conda-forge rdkit)

License

MIT License - see LICENSE file for details

Citation

If you use this library in your research, please cite:

@software{mehc_curation,
  title={MEHC-curation: An Automated Python Framework for High-Quality Molecular Dataset Preparation},
  author={Chinh Pham and Nhat-Anh Nguyen-Dang and Thanh-Hoang Nguyen-Vo and Binh P. Nguyen},
  month={dec},
  year={2025},
  version={1.0.5},
  url={https://github.com/biochem-data-sci/mehc-curation},
  license={MIT},
  doi={10.5281/zenodo.17568725}, 
  publisher={Zenodo}
}

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

For issues and questions, please open an issue on GitHub.
