
ML Audit

Solves Data Lineage Blindness by tracking granular preprocessing steps.

ml-audit is a lightweight Python library designed to bring transparency and reproducibility to data preprocessing. Unlike standard experiment trackers that treat preprocessing as a black box, this library records every granular transformation applied to your pandas DataFrame.

Why ML Audit?

It solves "Data Lineage Blindness".

Most data science teams suffer from a gap in their experiment tracking:

  • MLflow/W&B track metrics (accuracy, loss) and hyperparameters. They often treat the cleaned dataset as a static artifact.
  • DVC tracks file versions. It tells you that the data changed from Version A to Version B.
  • ML Audit tells you why and how it changed. It logs: "Imputed column 'Age' with mean (42.5), then Scaled with StandardScaler, then OneHotEncoded 'Gender'."
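Concretely, each logged step can be thought of as a structured record of the operation, its target column, and its fitted parameters. A hypothetical sketch of such entries (the actual JSON schema used by ml-audit is not shown on this page):

```python
import json

# Hypothetical audit-trail entries illustrating the lineage idea;
# ml-audit's real export format may differ.
trail = [
    {"step": 1, "op": "impute", "column": "Age", "strategy": "mean", "fill_value": 42.5},
    {"step": 2, "op": "scale", "column": "Age", "method": "standard"},
    {"step": 3, "op": "encode", "column": "Gender", "method": "onehot"},
]
print(json.dumps(trail, indent=2))
```

Because the fill value (42.5) is recorded alongside the strategy, a reader can reconstruct not just *that* the data changed, but exactly *how*.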

Features

  • Full Audit Trail: Automatically logs every step (Imputation, Scaling, Encoding, etc.) into a JSON audit file.
  • Reproducibility: Verify that your data pipeline produces the exact same result every time, using hash validation.
  • Visualization: Auto-generates an interactive HTML timeline of your preprocessing steps.
  • Comprehensive Operations:
    • Imputation: mean, median, mode, constant, ffill, bfill.
    • Scaling: minmax, standard, robust, maxabs.
    • Encoding: onehot, label, target encoding.
    • Balancing: smote (via imblearn), oversample, undersample.
    • Transformation: log, sqrt, boxcox.
    • Date Extraction: Extract year, month, day from timestamps.
  • Multi-Column Support: Apply operations to lists of columns efficiently.
  • Generic Support: Track any arbitrary pandas method (e.g., dropna, rename).

Installation

You can install ml-audit via pip:

pip install ml-audit

For SMOTE balancing support, install with the balance extra:

pip install ml-audit[balance]

Quick Start

1. Initialize the Recorder

import pandas as pd
from ml_audit import AuditTrialRecorder

# Load your data
df = pd.read_csv("data.csv")

# Initialize the auditor wrapped around your dataframe
auditor = AuditTrialRecorder(df, name="experiment_v1")

2. Apply Preprocessing

Chain methods fluently. Operations are applied immediately to auditor.current_df.

auditor.filter_rows("age", ">=", 18) \
       .impute(["salary", "score"], strategy='median') \
       .scale(["salary", "age"], method='minmax') \
       .encode("gender", method='onehot') \
       .balance_classes("churn", strategy='oversample') # Handles imbalanced data
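Fluent chaining of this kind works because each method mutates the working data, appends a log entry, and returns the recorder itself. A minimal sketch of that pattern (illustrative only; class and method bodies here are assumptions, not ml-audit's implementation):

```python
# Minimal sketch of a fluent, self-logging recorder (illustrative;
# not ml-audit's actual implementation).
class MiniRecorder:
    def __init__(self, data, name):
        self.current = data   # working copy of the data (dict of column -> list)
        self.name = name
        self.log = []         # ordered audit trail

    def _record(self, op, **params):
        self.log.append({"op": op, "params": params})
        return self           # returning self is what enables chaining

    def impute(self, column, strategy="mean"):
        values = [v for v in self.current[column] if v is not None]
        if strategy == "mean":
            fill = sum(values) / len(values)
        else:  # crude upper-median fallback for the sketch
            fill = sorted(values)[len(values) // 2]
        self.current[column] = [fill if v is None else v for v in self.current[column]]
        return self._record("impute", column=column, strategy=strategy, fill=fill)

rec = MiniRecorder({"age": [20, None, 40]}, "demo").impute("age", strategy="mean")
print(rec.log[0]["op"])  # impute
```

The key design choice is that logging is a side effect of the transformation itself, so the trail can never drift out of sync with the data.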

3. Access Data

processed_df = auditor.current_df
print(processed_df.head())

4. Export & Visualize

Save the audit trail. This generates a JSON file in audit_trails/ and an HTML visualization in visualizations/.

auditor.export_audit_trail("audit.json")
# Output:
# - audit_trails/audit.json
# - visualizations/audit.html

Detailed Documentation

Multi-Column Operations

All major preprocessing methods accept either a single string or a list of strings for column names.

# Scale multiple columns at once
auditor.scale(["height", "weight", "bmi"], method='standard')

Generic Pandas Tracking

For operations not natively built-in, use track_pandas to record any DataFrame method.

# Track a rename operation
auditor.track_pandas("rename", columns={"old_name": "new_name"})

# Track dropping NaNs
auditor.track_pandas("dropna", subset=["critical_col"])
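A generic tracker like this can be implemented with attribute lookup: resolve the named method on the wrapped object, call it, and log the call. The sketch below shows the dispatch idea on a plain string so it stays dependency-free (it is an illustration of the pattern, not ml-audit's internals):

```python
# Sketch of generic method tracking via getattr dispatch (illustrative;
# ml-audit's internals may differ). Works on any object whose methods
# return a new object, as pandas DataFrame methods typically do.
class GenericTracker:
    def __init__(self, obj):
        self.current = obj
        self.log = []

    def track(self, method_name, *args, **kwargs):
        # Look up the named method on the wrapped object and call it.
        self.current = getattr(self.current, method_name)(*args, **kwargs)
        self.log.append({"method": method_name, "args": args, "kwargs": kwargs})
        return self

t = GenericTracker("  hello  ")
t.track("strip").track("upper")
print(t.current)  # HELLO
```

Because dispatch is by name, any method of the wrapped object becomes trackable without writing a dedicated wrapper for it.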

Reproducibility Check

Verify that replaying your logs produces the exact same data hash as the current state.

if auditor.verify_reproducibility():
    print("Pipeline is scientifically reproducible!")
else:
    print("Pipeline result mismatch!")
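The underlying idea of such a check can be sketched with stdlib tools: hash the current data, replay the logged steps against the raw input, hash again, and compare digests. (The `filter_ge` step and hashing scheme below are assumptions for illustration; ml-audit's actual replay and hash logic are not documented on this page.)

```python
import hashlib
import json

def data_hash(rows):
    # Canonical JSON serialization -> stable SHA-256 digest.
    payload = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def replay(raw, steps):
    # Re-apply each logged step to a fresh copy of the raw data.
    data = list(raw)
    for step in steps:
        if step["op"] == "filter_ge":  # hypothetical logged operation
            data = [r for r in data if r[step["column"]] >= step["value"]]
    return data

raw = [{"age": 15}, {"age": 30}, {"age": 42}]
steps = [{"op": "filter_ge", "column": "age", "value": 18}]
current = replay(raw, steps)

# The pipeline is reproducible if replaying the log yields the same digest.
print(data_hash(replay(raw, steps)) == data_hash(current))  # True
```

Hashing a canonical serialization (sorted keys, fixed encoding) matters here: two equal datasets must always produce the same digest for the comparison to be meaningful.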

Visualization

Open the generated HTML file in visualizations/ to see a timeline like this:

  • Step 1: Load Data (Shape: 1000x5)
  • Step 2: Impute (salary -> median)
  • Step 3: Scale (age -> minmax)

License

MIT License. Free to use for personal and commercial projects.
