ml4t-diagnostic

Comprehensive diagnostic and evaluation framework for quantitative finance ML workflows

Python 3.11+ · MIT License

Statistical validation and diagnostics for quantitative trading strategies: signal analysis, backtest evaluation, and overfitting detection.

Part of the ML4T Library Ecosystem

This library is one of five interconnected libraries supporting the machine learning for trading workflow described in Machine Learning for Trading.

Each library addresses a distinct stage: data infrastructure, feature engineering, signal evaluation, strategy backtesting, and live deployment.

What This Library Does

Evaluating whether a signal or strategy has genuine predictive power requires statistical rigor. ml4t-diagnostic provides:

  • Information coefficient (IC) analysis with HAC-adjusted standard errors
  • Deflated Sharpe Ratio (DSR) and other multiple-testing corrections
  • Combinatorial purged cross-validation (CPCV) for time series
  • Feature importance analysis (MDI, PFI, MDA, SHAP)
  • Trade-level diagnostics with SHAP-based error pattern discovery
  • Portfolio performance metrics and tear sheets

The library implements methods from the academic finance literature, particularly those addressing backtest overfitting and false discovery in strategy research.


Installation

pip install ml4t-diagnostic

Optional dependencies:

pip install "ml4t-diagnostic[ml]"   # SHAP, importance analysis
pip install "ml4t-diagnostic[viz]"  # Plotly visualizations
pip install "ml4t-diagnostic[all]"  # everything above

Quick Start

Signal Analysis

from ml4t.diagnostic import analyze_signal

result = analyze_signal(
    factor=factor_data,  # columns: date, asset, factor
    prices=price_data,   # columns: date, asset, price
    periods=(1, 5, 21),  # forward-return horizons in days
)

print(f"IC (1D): {result.ic['1D']:.4f}")
print(f"IC t-stat (1D): {result.ic_t_stat['1D']:.2f}")
print(f"Q5-Q1 spread (1D): {result.spread['1D']:.2%}")
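The HAC adjustment behind the IC t-statistic can be sketched from scratch. This is an illustrative Newey-West estimator with Bartlett weights, not the library's implementation; `newey_west_tstat` is a hypothetical helper name.

```python
import math

def newey_west_tstat(series, max_lag=5):
    """t-statistic for the mean of an autocorrelated series, using a
    Newey-West (HAC) variance estimate with Bartlett kernel weights."""
    n = len(series)
    mean = sum(series) / n
    resid = [x - mean for x in series]
    # lag-0 autocovariance
    var = sum(r * r for r in resid) / n
    # add Bartlett-weighted autocovariances for lags 1..max_lag
    for lag in range(1, max_lag + 1):
        gamma = sum(resid[t] * resid[t - lag] for t in range(lag, n)) / n
        var += 2 * (1 - lag / (max_lag + 1)) * gamma
    se = math.sqrt(var / n)
    return mean / se

# Daily IC series; positive autocorrelation inflates the naive
# i.i.d. t-statistic, which the HAC variance corrects for.
ics = [0.03, 0.05, 0.04, 0.06, 0.02, 0.05, 0.04, 0.03, 0.06, 0.05]
print(f"HAC t-stat: {newey_west_tstat(ics, max_lag=2):.2f}")
```

With `max_lag=0` the estimator collapses to the ordinary i.i.d. standard error, which makes the adjustment easy to sanity-check.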

Deflated Sharpe Ratio

from ml4t.diagnostic.evaluation.stats import deflated_sharpe_ratio

# Accounts for multiple testing
dsr_result = deflated_sharpe_ratio(
    returns=strategy_returns,
    benchmark_sharpe=0.0,
    n_trials=100,
)

print(f"Sharpe: {dsr_result.sharpe_ratio:.2f}")
print(f"Deflated Sharpe: {dsr_result.deflated_sharpe:.2f}")
print(f"Significant: {dsr_result.is_significant}")

Feature Importance

from ml4t.diagnostic.evaluation import analyze_ml_importance

# Combines MDI, PFI, MDA, SHAP methods
results = analyze_ml_importance(model, X, y)
print(results.consensus_ranking)
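Permutation feature importance (PFI), one of the methods combined above, can be illustrated from scratch on a toy model. Everything below is a self-contained sketch, not the library's implementation.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean increase in MSE when each feature column is shuffled:
    features the model ignores score (near) zero."""
    rng = random.Random(seed)
    n, k = len(X), len(X[0])

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / n

    baseline = mse(X)
    importances = []
    for j in range(k):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            deltas.append(mse(X_perm) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy setup: only feature 0 drives the target.
rng = random.Random(42)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [3 * row[0] for row in X]
predict = lambda row: 3 * row[0]
imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 dominates; feature 1 scores zero
```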

Trade Diagnostics

from ml4t.diagnostic.evaluation import TradeAnalysis, TradeShapAnalyzer

analyzer = TradeAnalysis(trade_records)
worst_trades = analyzer.worst_trades(n=20)

# SHAP-based error pattern discovery
shap_analyzer = TradeShapAnalyzer(model, features_df, shap_values)
result = shap_analyzer.explain_worst_trades(worst_trades)

for pattern in result.error_patterns:
    print(f"Pattern: {pattern.hypothesis}")
    print(f"Potential savings: ${pattern.potential_impact:,.2f}")

Diagnostic Framework

Tier 1: Feature Analysis (Pre-Modeling)
├── Time series diagnostics (stationarity, ACF, volatility)
├── Distribution analysis (moments, normality, tails)
├── Feature importance (MDI, PFI, MDA, SHAP)
└── Feature interactions (conditional IC, H-stat)

Tier 2: Signal Analysis (Model Outputs)
├── IC analysis (time series, histogram, decay)
├── Quantile returns (spreads, monotonicity)
├── Turnover analysis
└── Multi-signal comparison

Tier 3: Backtest Analysis (Post-Modeling)
├── Trade analysis (win/loss, holding periods)
├── Statistical validity (DSR, RAS, PBO)
├── Trade-SHAP diagnostics
└── Excursion analysis (TP/SL optimization)

Tier 4: Portfolio Analysis (Production)
├── Performance metrics (Sharpe, Sortino, Calmar)
├── Drawdown analysis
├── Rolling metrics
└── Risk metrics (VaR, CVaR)
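The Tier 4 metrics reduce to a few lines each. A minimal sketch of maximum drawdown and historical VaR, assuming simple (non-log) period returns; these are illustrative functions, not the library's API.

```python
def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded equity curve."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        mdd = max(mdd, 1 - equity / peak)
    return mdd

def historical_var(returns, alpha=0.05):
    """Historical value-at-risk: loss at the alpha quantile of returns."""
    ordered = sorted(returns)
    idx = max(0, int(alpha * len(ordered)) - 1)
    return -ordered[idx]

rets = [0.01, -0.02, 0.015, -0.03, 0.005, 0.02, -0.01, 0.01, -0.005, 0.012]
print(f"Max drawdown: {max_drawdown(rets):.2%}")
print(f"95% VaR: {historical_var(rets):.2%}")
```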

Statistical Methods

Method                           Purpose
DSR (Deflated Sharpe Ratio)      Corrects for multiple-testing bias
CPCV (Combinatorial Purged CV)   Leak-free time-series cross-validation
RAS (Rademacher Anti-Serum)      Backtest-overfitting detection
PBO                              Probability of backtest overfitting
HAC-adjusted IC                  Autocorrelation-robust information coefficient
FDR control                      Multiple-comparison control (Benjamini-Hochberg)
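The DSR calculation from Bailey and López de Prado (2014) can be sketched from scratch: first the expected maximum Sharpe ratio across trials sets the hurdle, then a probabilistic Sharpe ratio is evaluated against that hurdle. This is a from-scratch illustration, not the library's `deflated_sharpe_ratio`; all Sharpe ratios are in per-observation (e.g. daily) units.

```python
import math
from statistics import NormalDist

N = NormalDist()
EULER_GAMMA = 0.5772156649015329

def expected_max_sharpe(n_trials, var_sharpe):
    """E[max SR] across n_trials independent trials: the hurdle
    a candidate strategy must clear after a multi-trial search."""
    return math.sqrt(var_sharpe) * (
        (1 - EULER_GAMMA) * N.inv_cdf(1 - 1 / n_trials)
        + EULER_GAMMA * N.inv_cdf(1 - 1 / (n_trials * math.e))
    )

def deflated_sharpe(sr, n_obs, skew, kurt, sr0):
    """Probability that the true Sharpe exceeds the hurdle sr0,
    adjusting for skewness and kurtosis of the return series."""
    num = (sr - sr0) * math.sqrt(n_obs - 1)
    den = math.sqrt(1 - skew * sr + (kurt - 1) / 4 * sr ** 2)
    return N.cdf(num / den)

# 100 trials whose (daily) Sharpe ratios have variance 0.01
sr0 = expected_max_sharpe(n_trials=100, var_sharpe=0.01)
dsr = deflated_sharpe(sr=0.3, n_obs=1250, skew=-0.3, kurt=4.0, sr0=sr0)
print(f"Hurdle SR: {sr0:.3f}, DSR: {dsr:.3f}")
```

A DSR close to 1 indicates the observed Sharpe is unlikely to be an artifact of trying many configurations.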

Cross-Validation

from ml4t.diagnostic.splitters import WalkForwardCV, CombinatorialCV
from ml4t.diagnostic.visualization import plot_cv_folds

# Walk-forward with purging
cv = WalkForwardCV(n_splits=5, train_size=252, test_size=63, purge_days=21)

# Visualize fold structure
fig = plot_cv_folds(cv, dates)
fig.show()

Technical Characteristics

  • Polars-based: Native Polars DataFrames throughout
  • HAC standard errors: Newey-West adjustment for autocorrelated data
  • Time-aware validation: Purged and embargoed cross-validation splits
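The purged, embargoed split logic can be sketched at the index level: training observations whose labels would overlap the test window are dropped by leaving a gap before each test fold. A minimal illustration, not the library's `WalkForwardCV`.

```python
def walk_forward_splits(n_obs, train_size, test_size, purge):
    """Yield (train_idx, test_idx) pairs; `purge` observations are
    skipped between train and test to avoid label leakage from
    overlapping forward-return windows."""
    start = 0
    while start + train_size + purge + test_size <= n_obs:
        train_end = start + train_size
        test_start = train_end + purge          # purge/embargo gap
        train = list(range(start, train_end))
        test = list(range(test_start, test_start + test_size))
        yield train, test
        start += test_size                      # roll the window forward

for train, test in walk_forward_splits(n_obs=30, train_size=10, test_size=5, purge=2):
    # e.g. first split: train 0-9, test 12-16
    print(f"train {train[0]}-{train[-1]}  test {test[0]}-{test[-1]}")
```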

Related Libraries

  • ml4t-data: Market data acquisition and storage
  • ml4t-engineer: Feature engineering and technical indicators
  • ml4t-backtest: Event-driven backtesting
  • ml4t-live: Live trading with broker integration

Development

git clone https://github.com/applied-ai/ml4t-diagnostic.git
cd ml4t-diagnostic
uv sync
uv run pytest tests/ -q -n auto
uv run ty check

References

  • López de Prado, M. (2018). Advances in Financial Machine Learning. Wiley.
  • Bailey, D. H., & López de Prado, M. (2012). "The Sharpe Ratio Efficient Frontier." Journal of Risk.
  • Bailey, D. H., & López de Prado, M. (2014). "The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting, and Non-Normality." Journal of Portfolio Management.

License

MIT License - see LICENSE for details.
