
This package implements bias correction methods for models estimated using synthetic data.


ValidMLInference

ValidMLInference is a Python package for correcting bias and performing valid inference in regressions that include variables generated by AI/ML methods. The bias-correction methods are described in Battaglia, Christensen, Hansen & Sacher (2024).

Requirements and installation

ValidMLInference requires Python 3.8 or later and the standard numerical packages numpy, scipy, jax, jaxopt, and numdifftools.

To install the package, run

pip install ValidMLInference

in your terminal.

Using ValidMLInference

To get started, we recommend looking at the following examples and resources:

  1. Remote Work: This notebook estimates the association between working from home and salaries using real-world job postings data (Hansen et al., 2023). It illustrates how the functions ols_bca, ols_bcm and one_step can be used to correct bias from regressing on AI/ML-generated labels. The notebook reproduces results from Table 1 of Battaglia, Christensen, Hansen & Sacher (2024).
  2. Topic Models: This notebook estimates the association between CEO time allocation and firm performance (Bandiera et al., 2020). It illustrates how the functions ols_bca_topic and ols_bcm_topic can be used to correct bias from estimated topic model shares. The notebook reproduces results from Table 2 of Battaglia, Christensen, Hansen & Sacher (2024).
  3. Synthetic Example: A synthetic example comparing the performance of different bias-correction methods in the context of AI/ML-generated labels.
  4. Functionality: A detailed reference describing all available functions, optional arguments, and usage tips.
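The synthetic example (item 3) centers on the attenuation bias that mislabeled regressors induce. As a package-free illustration of the core idea, the sketch below simulates a binary regressor observed with a known symmetric misclassification rate and undoes the attenuation by hand. This is a textbook moment-based correction for exposition only; it is not the package's ols_bca/ols_bcm implementation, which handles the general case described in the paper.

```python
import numpy as np

# Textbook illustration: with a binary regressor observed with a known
# symmetric misclassification rate alpha, the naive OLS slope is
# attenuated by lambda = Cov(X, X_obs) / Var(X_obs); dividing the naive
# slope by lambda undoes the bias.
rng = np.random.default_rng(0)
n, beta, alpha = 200_000, 2.0, 0.2

X = rng.binomial(1, 0.5, n)                    # true labels
flip = rng.binomial(1, alpha, n).astype(bool)  # which labels get flipped
X_obs = np.where(flip, 1 - X, X)               # mislabeled version
Y = 1.0 + beta * X + rng.normal(0, 1, n)

# Naive OLS slope on the mislabeled regressor
naive = np.cov(Y, X_obs)[0, 1] / np.var(X_obs, ddof=1)

# Attenuation factor for P(X = 1) = 0.5 and symmetric error rate alpha
lam = 1 - 2 * alpha
corrected = naive / lam

print(f"naive slope:     {naive:.3f}")      # ≈ beta * (1 - 2*alpha) = 1.2
print(f"corrected slope: {corrected:.3f}")  # ≈ 2.0
```

In practice the misclassification rate is rarely known, which is why the package estimates and corrects it from a validation sample rather than assuming it.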

Quickstart

The code below compares the coefficient estimated by ordinary least squares (OLS) with the one obtained from the one_step approach when the regressor is subject to classification error. The 95% confidence interval produced by one_step contains the true parameter value of 2, whereas the interval from the naive OLS fit does not.

import numpy as np
import pandas as pd
from ValidMLInference import ols, one_step

# Set random seed for reproducibility
np.random.seed(42)

# Generate synthetic data with mislabeling
n = 1000
true_effect = 2.0

# True treatment assignment
X_true = np.random.binomial(1, 0.5, n)

# Observed (mislabeled) treatment with 20% error rate
mislabel_prob = 0.2
X_obs = X_true.copy()
mislabel_mask = np.random.binomial(1, mislabel_prob, n).astype(bool)
X_obs[mislabel_mask] = 1 - X_obs[mislabel_mask]

# Generate outcome with true treatment effect
Y = 1.0 + true_effect * X_true + np.random.normal(0, 1, n)

# Create DataFrame
data = pd.DataFrame({'Y': Y, 'X_obs': X_obs})

# Naive OLS using mislabeled data
ols_result = ols(formula="Y ~ X_obs", data=data)
print("OLS Results (using mislabeled data):")
print(ols_result.summary())

# One-step estimator that corrects for mislabeling
one_step_result = one_step(formula="Y ~ X_obs", data=data)
print("\nOne-Step Results (correcting for mislabeling):")
print(one_step_result.summary())

ols_ci = ols_result.summary().loc['X_obs', ['2.5%', '97.5%']]
one_step_ci = one_step_result.summary().loc['X_obs', ['2.5%', '97.5%']]

print(f"\nTrue treatment effect: {true_effect}")
print(f"OLS 95% CI contains true value: {ols_ci['2.5%'] <= true_effect <= ols_ci['97.5%']}")
print(f"One-step 95% CI contains true value: {one_step_ci['2.5%'] <= true_effect <= one_step_ci['97.5%']}")
Output:

OLS Results (using mislabeled data):
           Estimate  Std. Error    z value  P>|z|      2.5%     97.5%
Intercept  1.392265    0.055828  24.938313    0.0  1.282843  1.501687
X_obs      1.207589    0.078643  15.355267    0.0  1.053451  1.361727

One-Step Results (correcting for mislabeling):
           Estimate  Std. Error    z value  P>|z|      2.5%     97.5%
X_obs      1.828638    0.108976  16.780127    0.0  1.615048  2.042228
Intercept  1.092510    0.107082  10.202534    0.0  0.882633  1.302387

True treatment effect: 2.0
OLS 95% CI contains true value: False
One-step 95% CI contains true value: True
