Hybrid AI + Econometric library for volatility, risk and macro forecasting.
hybridecon — A Unified Hybrid AI + Econometric Forecasting Library
Author: Dr Merwan Roudane · Email: merwanroudane920@gmail.com · GitHub: https://github.com/merwanroudane/hybridecon · Version: 0.2.0 · License: MIT
hybridecon is a single Python package that implements every hybrid
deep-learning + econometric model surveyed in the 2023–2026 volatility,
risk and macro-forecasting literature, together with the complete
pre-estimation and post-estimation diagnostic battery, model selection
machinery, evaluation metrics and publication-quality plots/tables that
those papers rely on.
The design goal is one consistent API across all hybrids — every model follows the chain
m = SomeHybrid(...).fit(returns, ...)
m.forecast(horizon=h)
m.diagnose().plot().report("md")
so you can swap models in a benchmarking loop without rewriting any boilerplate.
Table of Contents
- Installation
- 60-second quickstart
- Conceptual guide
- 3.1 GARCH family
- 3.2 GARCH-MIDAS
- 3.3 HAR-RV
- 3.4 ARIMA / ARFIMA
- 3.5 VAR / BEKK / DCC
- 3.6 GAS / score-driven
- 3.7 Deep components
- 3.8 Kolmogorov-Arnold Networks
- 3.9 Regime detection
- The 17 hybrid models
- Pre-estimation diagnostics
- Post-estimation diagnostics
- Evaluation metrics & MCS
- Trading & risk metrics
- Visualisation & tables
- Output directory layout
- End-to-end recipes
- API reference
- Troubleshooting
- Citation
- License
1. Installation
From source (development)
git clone https://github.com/merwanroudane/hybridecon.git
cd hybridecon
pip install -e ".[full]"
From PyPI (after publication)
pip install hybridecon # core (pure-Python econometrics + plots)
pip install "hybridecon[deep]" # adds PyTorch for deep components / hybrids
pip install "hybridecon[plot]" # adds seaborn + plotly
pip install "hybridecon[full]" # everything above + JupyterLab + SHAP
Tested on: Python 3.10–3.13 · PyTorch ≥ 2.0 · statsmodels ≥ 0.14 · arch ≥ 6.0 · hmmlearn ≥ 0.3 · scikit-learn ≥ 1.3.
2. 60-second quickstart
from hybridecon.data import load_demo_returns
from hybridecon.hybrids import HARLSTMGARCH
r = load_demo_returns("SPX", n=3000) # synthetic returns Series
m = HARLSTMGARCH().fit(r) # 3-stage cascade
fc = m.forecast(horizon=5) # h=1..5 vol forecasts
m.diagnose().plot() # tests + plots saved to OUTDIR
print(m.report("md")) # markdown report string
OUTDIR defaults to ./hybridecon_outputs/ — override via the
HYBRIDECON_OUTDIR env variable.
A complete walk-through — fit all hybrids, run the Model Confidence Set,
emit the comparison table — lives in
examples/run_full_pipeline.py.
3. Conceptual guide
This section is the theory cheat sheet: every formula the library implements, in the same notation as the API.
3.1 GARCH family
hybridecon.econometric.GARCH wraps the arch package and unifies
GARCH(p,q), EGARCH, GJR-GARCH, APARCH, TARCH, FIGARCH and HARCH under one
constructor.
σ²_t = ω + Σ α_i ε²_{t-i} + Σ γ_j ε²_{t-j} 1[ε_{t-j}<0] + Σ β_k σ²_{t-k}
| vol= | Reference | Captures |
|---|---|---|
| "GARCH" | Bollerslev 1986 | symmetric clustering |
| "EGARCH" | Nelson 1991 | asymmetric, no positivity constraint on coefficients |
| "GJR-GARCH" | Glosten-Jagannathan-Runkle 1993 | leverage via 1[ε<0] |
| "APARCH" | Ding-Granger-Engle 1993 | flexible power & asymmetry |
| "TARCH" | Zakoian 1994 | absolute-value asymmetry (power = 1) |
| "FIGARCH" | Baillie-Bollerslev-Mikkelsen 1996 | long memory in volatility |
| "HARCH" | Müller et al. 1997 | heterogeneous (multi-scale) |
from hybridecon.econometric import GARCH
m = GARCH(vol="GJR-GARCH", p=1, o=1, q=1, dist="t", mean="Constant").fit(r)
m.conditional_volatility # σ̂_t (T,)
m.standardized_residuals # z_t = ε_t / σ̂_t
m.forecast(horizon=10) # σ̂_{T+1..T+10}
m.aic_bic() # {"AIC":..., "BIC":..., "loglik":...}
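The variance recursion in the display above can be sketched directly in NumPy. This is an illustrative filter with hypothetical parameter values, not the library's estimator (which delegates to arch):

```python
import numpy as np

def gjr_garch_filter(eps, omega, alpha, gamma, beta):
    # sigma2_t = omega + alpha*eps_{t-1}^2 + gamma*eps_{t-1}^2*1[eps_{t-1}<0] + beta*sigma2_{t-1}
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()  # initialise at the sample variance
    for t in range(1, len(eps)):
        e2 = eps[t - 1] ** 2
        sigma2[t] = (omega + alpha * e2
                     + gamma * e2 * (eps[t - 1] < 0)
                     + beta * sigma2[t - 1])
    return sigma2

rng = np.random.default_rng(0)
eps = 0.01 * rng.standard_normal(500)
sigma2 = gjr_garch_filter(eps, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90)
```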
3.2 GARCH-MIDAS
Engle, Ghysels & Sohn (2013) decompose total variance into a fast short-term GARCH component and a slow MIDAS-driven long-term component fed by lower-frequency macro data.
σ²_t = g_t · τ_t
g_t = (1-α-β) + α (r_{t-1}-μ)² / τ_{t-1} + β g_{t-1}
log τ_t = m + θ Σ_{k=1..K} φ_k(ω) X_{t-k}
φ_k(ω) ∝ (k/K)^{ω₁-1} (1-k/K)^{ω₂-1} (Beta weighting)
from hybridecon.econometric import GARCHMIDAS
gm = GARCHMIDAS(K=12, w_restricted=True).fit(daily_returns, monthly_macro)
gm.tau_, gm.g_ # long & short components
gm.variance_ratio() # share of total variance from τ
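The Beta weighting scheme above is easy to reproduce on its own. A minimal sketch follows; the k/(K+1) grid, which keeps both endpoints strictly inside (0, 1), is an assumption about the exact parameterisation:

```python
import numpy as np

def beta_weights(K, w1=1.0, w2=3.0):
    # phi_k(omega) proportional to (k/K)^(w1-1) * (1 - k/K)^(w2-1), normalised to sum to 1
    k = np.arange(1, K + 1) / (K + 1)
    w = k ** (w1 - 1) * (1 - k) ** (w2 - 1)
    return w / w.sum()

phi = beta_weights(K=12)  # with w1 = 1 and w2 > 1 the weights decay with the lag
```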
3.3 HAR-RV
Corsi (2009) decomposes realised volatility across daily, weekly and monthly horizons:
RV_t = β_0 + β_d RV_{t-1} + β_w mean(RV_{t-5:t-1}) + β_m mean(RV_{t-22:t-1}) + ε_t
from hybridecon.econometric import HAR
from hybridecon.utils import realized_volatility
rv = realized_volatility(r, window=22)
har = HAR().fit(rv)
har.forecast(horizon=5)
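Under the hood, a HAR fit is just OLS on three lagged averages. A minimal sketch of the design matrix (a hypothetical helper, not the library's internals):

```python
import numpy as np

def har_design(rv):
    # RV_t regressed on RV_{t-1}, mean(RV_{t-5:t-1}), mean(RV_{t-22:t-1})
    rv = np.asarray(rv, dtype=float)
    T = len(rv)
    d = rv[21:T - 1]                                           # daily lag RV_{t-1}
    w = np.array([rv[t - 5:t].mean() for t in range(22, T)])   # weekly mean
    m = np.array([rv[t - 22:t].mean() for t in range(22, T)])  # monthly mean
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    return X, y

rv = np.abs(np.random.default_rng(1).standard_normal(300)) + 0.1
X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficients (beta_0, beta_d, beta_w, beta_m)
```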
3.4 ARIMA / ARFIMA
ARIMAWrapper uses pmdarima.auto_arima if order=None, else
statsmodels.ARIMA(order). ARFIMAWrapper estimates the long-memory
parameter d via the GPH log-periodogram regression, fractionally
differences the series, then fits an ARMA(p, q) on the result.
from hybridecon.econometric import ARIMAWrapper, ARFIMAWrapper
ARIMAWrapper().fit(y).forecast(10) # auto order
ARFIMAWrapper(p=1, q=1).fit(y).forecast(10) # auto d via GPH
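The GPH step mentioned above regresses the log-periodogram on a log-frequency transform over the first m Fourier frequencies. A self-contained sketch; the sqrt(T) bandwidth is one common choice and not necessarily the library's default:

```python
import numpy as np

def gph_estimate_d(y, m=None):
    # log I(lambda_j) = c - 2 d log(2 sin(lambda_j / 2)) + error, for j = 1..m
    y = np.asarray(y, dtype=float)
    n = len(y)
    m = m or int(np.sqrt(n))                          # common bandwidth choice
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n
    fft = np.fft.fft(y - y.mean())
    I = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram ordinates
    x = -2 * np.log(2 * np.sin(lam / 2))
    X = np.column_stack([np.ones(m), x])
    (c, d), *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return d

white = np.random.default_rng(0).standard_normal(2000)
d_hat = gph_estimate_d(white)  # should land near 0 for white noise
```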
3.5 VAR / BEKK / DCC
- VARWrapper(lags="aic"|"bic"|"hqic"|int) — VAR(p) from statsmodels.
- DiagonalBEKK — multivariate GARCH of Engle & Kroner (1995):
H_t = CC' + a a' ⊙ ε_{t-1}ε_{t-1}' + b b' ⊙ H_{t-1}
estimated by Gaussian QML; .conditional_correlation() gives the implied
ρ_t surface used in viz.plot_correlation_surface.
3.6 GAS / score-driven
Generalised Autoregressive Score model (Creal-Koopman-Lucas 2013) with Student-t innovations, log-variance dynamics:
f_t = ω + α s_{t-1} + β f_{t-1}
s_t = (1+ν) (r_t-μ)² / ((ν-2) e^{f_t} + (r_t-μ)²) - 1
GAS().fit(r).conditional_volatility returns √(exp f_t).
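The two-equation recursion above amounts to a simple filter. An illustrative NumPy version with hypothetical parameter values (the library estimates these by maximum likelihood):

```python
import numpy as np

def gas_t_filter(r, omega, alpha, beta, mu=0.0, nu=8.0):
    # f_t = omega + alpha * s_{t-1} + beta * f_{t-1}          (log-variance)
    # s_t = (1+nu)(r_t-mu)^2 / ((nu-2) e^{f_t} + (r_t-mu)^2) - 1   (scaled score)
    T = len(r)
    f = np.empty(T)
    s = np.empty(T)
    f[0] = np.log(np.var(r))          # initialise at log sample variance
    for t in range(T):
        if t > 0:
            f[t] = omega + alpha * s[t - 1] + beta * f[t - 1]
        u2 = (r[t] - mu) ** 2
        s[t] = (1 + nu) * u2 / ((nu - 2) * np.exp(f[t]) + u2) - 1
    return np.sqrt(np.exp(f))         # conditional volatility sqrt(exp f_t)

r = 0.01 * np.random.default_rng(2).standard_normal(400)
vol = gas_t_filter(r, omega=-0.1, alpha=0.05, beta=0.97)
```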
3.7 Deep components (LSTM, GRU, BiLSTM, CNN, Attention, KAN)
All factories live in hybridecon.deep and return PyTorch nn.Module
objects ready for the generic Trainer. PyTorch is imported lazily so
the rest of the library works without it.
from hybridecon.deep import (make_lstm, make_gru, make_bilstm,
make_cnn_bilstm, make_cnn_bilstm_attention,
make_attention_lstm, make_kan,
Trainer, TrainConfig, pinball_loss_factory)
net = make_attention_lstm(input_size=3, hidden=64, n_heads=4)
tr = Trainer(net, loss="huber", cfg=TrainConfig(epochs=60, patience=10))
tr.fit(X_train, y_train, X_val, y_val)
yhat = tr.predict(X_test)
Trainer features: Adam, ReduceLROnPlateau, gradient clipping, early
stopping with best-state restore, optional CUDA, history_["train"|"val"]
exposed for plotting.
3.8 Kolmogorov-Arnold Networks (KAN)
A two-layer KAN with a compact spline-on-edges parameterisation — each
edge (in_f, out_f) carries a learnable smooth basis applied on a fixed
grid in [-1, 1]:
edge(x) = Σ_g coef[in,out,g] · φ((x - grid[g])⁺) · scale[in,out] + base[in,out]
y = SUM_in edges
Used in KANGARCHMIDAS to learn a non-linear correction of the long-term
component τ_t.
3.9 Regime detection
- HMMRegime(n_states=2|3|4) — Gaussian HMM via hmmlearn. States are remapped so 0 is always the calmest regime.
- KMeansRegime(n_states=3) — K-means on (|r|, σ̂) features; default labels bull / choppy / crisis.
Both return a RegimeResult(states, labels, proba, centers, method).
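A minimal stand-alone version of the K-means regime idea, including the "state 0 is calmest" remapping, assuming scikit-learn and a toy feature set rather than the library's exact features:

```python
import numpy as np
from sklearn.cluster import KMeans

# toy returns: calm first half, turbulent second half
rng = np.random.default_rng(0)
r = rng.standard_normal(500) * np.where(np.arange(500) < 250, 0.01, 0.03)
sigma = np.convolve(np.abs(r), np.ones(20) / 20, mode="same")  # crude rolling vol proxy
X = np.column_stack([np.abs(r), sigma])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# remap labels so 0 is the calmest regime (lowest mean |r| centre)
order = np.argsort(km.cluster_centers_[:, 0])
states = np.argsort(order)[km.labels_]
```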
4. The 17 hybrid models
All hybrids subclass HybridBase and expose:
- .fit(...) — returns self
- .forecast(horizon) — np.ndarray
- .diagnose() — returns self (populates .diagnostics_)
- .plot(prefix=None) — returns self (saves to OUTDIR/figures)
- .report(fmt="md") — returns str (md / latex / html)
- .summary() — one-line text
- .metrics_ — dict of out-of-sample metrics
- .history_ — dict of training curves (when applicable)
- .forecast_, .y_test_ — test-set arrays
- .residuals_ — test-set residuals
| # | Class | Pattern | Reference |
|---|---|---|---|
| 1 | GARCHLSTM | GARCH(σ_t) → LSTM on (r, σ, z) | Kim-Won 2018 |
| 2 | GARCHGRU | same with GRU backbone | Michańków, Kwiatkowski, Morajda 2023 |
| 3 | GARCHBiLSTM | bidirectional variant | — |
| 4 | GARCHCNNBiLSTM | CNN front-end + BiLSTM | — |
| 5 | GARCHAttentionLSTM | LSTM + additive or multi-head attention | — |
| 6 | ARIMALSTM | y = ARIMA(y) + LSTM(residual) (Zhang 2003 additive) | Stempień & Ślepaczuk 2025 |
| 7 | ARFIMALSTM | long-memory variant of #6 | Stempień & Ślepaczuk 2025 |
| 8 | HARLSTMGARCH | HAR baseline + LSTM(residual) + GARCH(risk) | Ben Romdhane & Boubaker 2026 |
| 9 | KANGARCHMIDAS | GARCH-MIDAS components → KAN nonlinearity on log τ_t | Liu et al. 2025 |
| 10 | MIDASCNNBiLSTMAttention | GARCH-MIDAS components → CNN-BiLSTM-MultiHeadAttention | Zhang, Zhang & Hu 2025 |
| 11 | MIDASLSTM | Beta-MIDAS aggregation + Pearson selection + LSTM | Du, Ji, Du & Wang 2025 |
| 12 | GASATTLSTM | GAS state + multi-head attention LSTM | Astudillo et al. 2025 |
| 13 | DeepBEKK | Diagonal BEKK + per-asset LSTM residual | Wang, Liu, Tran & Wang 2025 |
| 14 | DeepVAR | VAR baseline + per-equation LSTM residual | Altmeyer, Agusti & Vidal-Quadras Costa 2021 |
| 15 | DeepTVAR | LSTM emits time-varying VAR(1) coefficients with spectral-radius stability constraint (Ansley-Kohn proxy) | Li & Yuan 2023 |
| 16 | DeepQuantileVaR | LSTM with pinball loss → conditional VaR; ships with Kupiec-POF + Christoffersen-IND backtest | Chronopoulos, Raftapostolos & Kapetanios 2024 |
| 17 | RegimeAwareLSTMGARCH | HMM regime → regime-specific LSTM heads on top of GJR-GARCH features; emits regime ribbon | Volatility-Forecasting-Regime-Aware-LSTM-GARCH repo |
4.1 Common syntax
from hybridecon.hybrids import GARCHAttentionLSTM
from hybridecon.deep import TrainConfig
m = GARCHAttentionLSTM(
lookback=20,
hidden=64,
layers=1,
dropout=0.1,
n_heads=4,
garch_kwargs={"vol": "GJR-GARCH", "p": 1, "o": 1, "q": 1, "dist": "t"},
train_cfg=TrainConfig(epochs=80, batch_size=64, lr=1e-3, patience=10),
).fit(r)
m.forecast(horizon=5) # σ̂_{T+1..T+5}
m.diagnose().plot()
report = m.report("md")
4.2 Mixed-frequency hybrids
KANGARCHMIDAS, MIDASCNNBiLSTMAttention, MIDASLSTM need a
low-frequency regressor (monthly / quarterly):
from hybridecon.hybrids import KANGARCHMIDAS
import numpy as np
rng = np.random.default_rng(0)
monthly_macro = rng.normal(0, 1, 60) # 60 months
m = KANGARCHMIDAS(K=12, hidden=16, grid=5).fit(daily_returns, monthly_macro)
For MIDASLSTM the second argument can be a 2-D panel (T_low, p):
from hybridecon.hybrids import MIDASLSTM
panel = np.column_stack([ip_growth, epu, vix, nfci]) # (T_low, 4)
m = MIDASLSTM(K=12, lookback=20, hidden=64, top_k=3).fit(daily_y, panel)
4.3 Multivariate hybrids
DeepBEKK, DeepVAR, DeepTVAR accept a 2-D return matrix or DataFrame
(T, k).
from hybridecon.hybrids import DeepTVAR
import pandas as pd
Y = pd.DataFrame({"gdp": gdp, "cpi": cpi, "ffr": ffr})
m = DeepTVAR(lookback=12, hidden=32).fit(Y)
m.A_test_ # (n_test, k, k) time-varying coefficient surface
m.rho_t_ # spectral radius series (always ≤ ρ_max)
m.forecast(8) # iterated VAR forecast using last A_T
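The stability constraint is a bound on the spectral radius of the time-varying coefficient matrix. As a quick illustration of the quantity being bounded (not the library's constrained parameterisation):

```python
import numpy as np

def spectral_radius(A):
    # rho(A) = max |eigenvalue|; a VAR(1) y_t = A y_{t-1} + e_t is stable iff rho(A) < 1
    return np.abs(np.linalg.eigvals(A)).max()

A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
rho = spectral_radius(A)  # 0.6 here, so this VAR(1) is stable
```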
4.4 Quantile / VaR hybrid
from hybridecon.hybrids import DeepQuantileVaR
q = DeepQuantileVaR(alpha=0.05, lookback=20, hidden=64).fit(returns)
q.forecast_ # VaR series on test set
q.diagnostics_["var_backtest"] # Kupiec POF + Christoffersen IND
4.5 Benchmark-and-rank in one call
from hybridecon.hybrids import (run_benchmark, GARCHLSTM, GARCHGRU,
GARCHBiLSTM, GARCHAttentionLSTM, HARLSTMGARCH)
table = run_benchmark(returns,
models=[GARCHLSTM(), GARCHGRU(), GARCHBiLSTM(),
GARCHAttentionLSTM(), HARLSTMGARCH()],
alpha_mcs=0.10)
print(table) # metrics + MCS rank + p-value + in_MCS flag
This auto-saves benchmark_metrics.md, benchmark_mcs.md and a
benchmark_mcs.png heatmap to OUTDIR/tables and OUTDIR/figures.
5. Pre-estimation diagnostics
hybridecon.diagnostics.pre_estimation_battery(x) returns a tidy
DataFrame(name, statistic, pvalue, decision, …) with every test below.
Each test is also exposed as a top-level function returning a TestResult
dataclass.
| Function | Test | Null | Interpretation |
|---|---|---|---|
| adf_test | Augmented Dickey-Fuller | unit root | p<0.05 ⇒ stationary |
| kpss_test | KPSS | stationary | p<0.05 ⇒ unit root |
| phillips_perron_test | Phillips-Perron | unit root | nonparametric variant of ADF |
| zivot_andrews_test | Zivot-Andrews | unit root with one structural break | endogenous break date |
| ljungbox_test | Ljung-Box (Q) | no autocorrelation | white-noise check |
| arch_lm_test | Engle ARCH-LM | no ARCH effects | p<0.05 ⇒ heteroskedasticity |
| mcleod_li_test | McLeod-Li | no nonlinear dependence | autocorrelation in r² |
| jarque_bera_test | Jarque-Bera | normality | skewness + kurtosis |
| shapiro_wilk_test | Shapiro-Wilk | normality | small-N |
| anderson_darling_test | Anderson-Darling | normality | tail-sensitive |
| bds_test | BDS (Brock et al.) | i.i.d. | nonlinearity |
| cusum_break_test | OLS-CUSUM | parameter stability | structural-break flag |
Example:
from hybridecon.diagnostics import pre_estimation_battery
from hybridecon.viz import diagnostic_table
df = pre_estimation_battery(returns)
print(diagnostic_table(df, caption="Pre-estimation diagnostics"))
6. Post-estimation diagnostics
post_estimation_battery(residuals, …) runs Ljung-Box on residuals,
McLeod-Li on squared residuals, JB on standardised residuals.
Forecast-comparison tests are exposed individually:
| Function | Purpose |
|---|---|
| diebold_mariano(loss1, loss2, h=1, power=2) | DM test of equal predictive accuracy |
| giacomini_white(loss1, loss2, h=1) | GW conditional predictive ability |
| wilcoxon_loss_diff(loss1, loss2) | nonparametric paired test |
| mincer_zarnowitz(actual, predicted) | regression-based unbiasedness, R² |
| kupiec_pof(hits, alpha) | VaR coverage (proportion of failures) |
| christoffersen_independence(hits) | clustering of VaR breaches |
from hybridecon.diagnostics import diebold_mariano
dm = diebold_mariano(loss_modelA, loss_modelB, h=1, power=2)
print(dm.statistic, dm.pvalue, dm.decision)
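For intuition, the DM statistic is just a t-test on the mean loss differential with a long-run (HAC) variance. A minimal sketch; the Bartlett weights up to h-1 lags are one standard choice, not necessarily what the library uses:

```python
import numpy as np
from math import erfc, sqrt

def dm_statistic(loss1, loss2, h=1):
    # mean loss differential scaled by a Newey-West long-run variance estimate
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    T = len(d)
    dc = d - d.mean()
    lrv = np.mean(dc ** 2)
    for k in range(1, h):
        cov = np.mean(dc[k:] * dc[:-k])
        lrv += 2 * (1 - k / h) * cov      # Bartlett-weighted autocovariances
    stat = d.mean() / sqrt(lrv / T)
    pval = erfc(abs(stat) / sqrt(2))      # two-sided normal p-value
    return stat, pval

rng = np.random.default_rng(0)
lossA = (rng.standard_normal(250) + 0.8) ** 2  # biased forecasts, larger losses
lossB = rng.standard_normal(250) ** 2
stat, pval = dm_statistic(lossA, lossB)
```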
7. Evaluation metrics & Model Confidence Set
Point / volatility metrics — hybridecon.evaluate
rmse, mae, mape, smape, medae, qlike
evaluate_point_forecast(y, p) # → dict
evaluate_volatility_forecast(rv, σ̂) # adds QLIKE
Model Confidence Set (Hansen, Lunde, Nason 2011)
from hybridecon.evaluate import model_confidence_set
mcs = model_confidence_set(loss_matrix_DataFrame, alpha=0.10,
B=5000, block=5)
# columns: model, rank, p_value, in_MCS
The implementation uses the stationary block bootstrap to approximate the distribution of the t-statistics of each model's losses relative to the cross-model average, sequentially eliminating the worst-performing model until the equal-predictive-accuracy hypothesis is no longer rejected.
8. Trading & risk metrics
from hybridecon.evaluate import (annualised_return_compound,
annualised_volatility, max_drawdown,
sortino, information_ratio,
adjusted_information_ratio,
trading_metrics, expected_shortfall,
var_backtest_table)
trading_metrics(strategy_returns)
# {"ARC", "ASD", "MD", "IR", "IR*", "Sortino"}
expected_shortfall(returns, var_forecasts, alpha=0.05)
# {"ES_emp", "ES_avg", "n_breaches", "breach_rate", "alpha"}
adjusted_information_ratio = ARC² · sign(ARC) / (ASD · |MD|) —
the Adjusted IR* used in the regime-aware repo.
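A self-contained sketch of that formula, assuming 252 trading periods per year; note the ratio is undefined when the drawdown is zero:

```python
import numpy as np

def adjusted_ir(returns, periods_per_year=252):
    # IR* = ARC^2 * sign(ARC) / (ASD * |MD|)
    r = np.asarray(returns, dtype=float)
    equity = np.cumprod(1 + r)
    arc = equity[-1] ** (periods_per_year / len(r)) - 1        # annualised compound return
    asd = r.std(ddof=1) * np.sqrt(periods_per_year)            # annualised volatility
    md = np.max(1 - equity / np.maximum.accumulate(equity))    # maximum drawdown
    return arc ** 2 * np.sign(arc) / (asd * abs(md))

rng = np.random.default_rng(0)
strat = 0.0005 + 0.01 * rng.standard_normal(504)               # two years of daily P&L
ir_star = adjusted_ir(strat)
```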
9. Visualisation & tables
All plotting helpers in hybridecon.viz save into
OUTDIR/figures/<name>.png at 130 DPI on a clean white-grid theme.
| Function | Purpose |
|---|---|
| plot_series | generic time series |
| plot_returns_and_volatility | 2-panel returns + σ̂ |
| plot_acf_pacf | side-by-side ACF / PACF |
| plot_qq | normal Q-Q |
| residual_diagnostic_panel | 4-panel: residuals, ACF, ACF², Q-Q |
| plot_forecast | forecast vs actual ± 95% CI band |
| plot_volatility_decomposition | total, √g, √τ from GARCH-MIDAS |
| plot_regime_ribbon | shaded regime overlay |
| plot_training_curve | train / val loss |
| plot_mcs_heatmap | MCS p-values with ✓ for in-MCS models |
| plot_correlation_surface | pairwise time-varying ρ from BEKK/DCC |
Tables (viz.table, viz.diagnostic_table, viz.metrics_table) wrap
tabulate and accept any of "github" | "pipe" | "latex" | "html" | "simple_outline" | "fancy_grid". Pass filename="diag" to also save
to OUTDIR/tables/diag.{md|tex|html}.
10. Output directory layout
hybridecon_outputs/
├── figures/
│ ├── benchmark_mcs.png
│ ├── garch-lstm_training.png
│ ├── garch-lstm_residuals.png
│ ├── garch-lstm_vol.png
│ ├── garch-lstm_forecast.png
│ ├── deep-tvar_rho.png
│ ├── regime-aware-lstm-garch_regime.png
│ └── …
└── tables/
├── pre_diag.md
├── benchmark_metrics.md
├── benchmark_mcs.md
└── garch-lstm_metrics.md
Override the location:
export HYBRIDECON_OUTDIR=/path/to/run_2026_05_02
11. End-to-end recipes
11.1 Full volatility-forecasting pipeline
import hybridecon as he
from hybridecon.data import load_demo_returns
from hybridecon.diagnostics import pre_estimation_battery
from hybridecon.hybrids import (GARCHLSTM, GARCHGRU, GARCHBiLSTM,
GARCHCNNBiLSTM, GARCHAttentionLSTM,
HARLSTMGARCH, GASATTLSTM,
RegimeAwareLSTMGARCH, run_benchmark)
from hybridecon import viz
he.set_seed(42)
r = load_demo_returns("SPX", n=2500)
# 1. Pre-tests
pre = pre_estimation_battery(r)
print(viz.diagnostic_table(pre, caption="Pre-estimation tests"))
# 2. Benchmark all volatility hybrids
table = run_benchmark(r, models=[
GARCHLSTM(), GARCHGRU(), GARCHBiLSTM(),
GARCHCNNBiLSTM(), GARCHAttentionLSTM(),
HARLSTMGARCH(), GASATTLSTM(),
RegimeAwareLSTMGARCH(n_states=2),
])
print(table)
11.2 Mixed-frequency macro forecasting
import numpy as np
from hybridecon.hybrids import MIDASLSTM
from hybridecon.deep import TrainConfig
T = 2500
y = np.cumsum(0.001 + 0.01 * np.random.randn(T)) # daily target
macro = np.random.randn(120, 4) # monthly factors
m = MIDASLSTM(K=12, lookback=20, top_k=2,
train_cfg=TrainConfig(epochs=60)).fit(y, macro)
print(m.metrics_)
m.diagnose().plot()
11.3 Risk-engine: VaR + ES backtest
from hybridecon.hybrids import DeepQuantileVaR
from hybridecon.evaluate import expected_shortfall
q = DeepQuantileVaR(alpha=0.05).fit(returns)
print(q.diagnostics_["var_backtest"]) # Kupiec & Christoffersen
print(expected_shortfall(q.y_test_, q.forecast_, alpha=0.05))
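The Kupiec test behind that backtest table is a short likelihood ratio. An illustrative implementation (χ²(1) p-value via the normal tail), not the library's code:

```python
import numpy as np
from math import log, sqrt, erfc

def kupiec_lr(hits, alpha):
    # H0: P(breach) = alpha;  LR = -2 log[ L(alpha) / L(pi_hat) ] ~ chi2(1)
    hits = np.asarray(hits, dtype=int)
    T, x = len(hits), int(hits.sum())
    pi = x / T
    if x == 0 or x == T:                          # degenerate breach counts
        return float("nan"), float("nan")
    lr = -2 * (x * log(alpha / pi) + (T - x) * log((1 - alpha) / (1 - pi)))
    pval = erfc(sqrt(max(lr, 0.0) / 2))           # chi2(1) survival function
    return lr, pval

hits = np.zeros(1000, dtype=int)
hits[:50] = 1                                     # exactly 5% breaches
lr, pval = kupiec_lr(hits, alpha=0.05)            # LR = 0, p-value = 1 under perfect coverage
```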
11.4 Multivariate macro VAR with time-varying coefficients
import pandas as pd
from hybridecon.hybrids import DeepTVAR
df = pd.DataFrame({"gdp": gdp, "cpi": cpi, "ffr": ffr})
m = DeepTVAR(lookback=12, hidden=32).fit(df)
m.rho_t_ # stability series ρ(A_t) ≤ 0.99 by construction
m.forecast(horizon=8) # 8-step iterated forecast
m.plot() # saves training curve + ρ_t plot
11.5 Regime-aware volatility with HMM
from hybridecon.hybrids import RegimeAwareLSTMGARCH
m = RegimeAwareLSTMGARCH(n_states=3, lookback=20, hidden=64).fit(returns)
m.states_ # (T,) inferred regime path (0 = calmest)
m.regime_.labels # ["calm","normal","stressed"]
m.diagnose().plot() # auto-emits regime ribbon
11.6 Custom DM comparison between two of your own models
from hybridecon.diagnostics import diebold_mariano
loss_a = (y_test - pred_a) ** 2
loss_b = (y_test - pred_b) ** 2
print(diebold_mariano(loss_a, loss_b, h=1, power=2))
12. API reference
Top-level
import hybridecon as he
he.set_seed(seed=42) # numpy / random / torch
he.OUTDIR # Path object
he.__version__ # "0.2.0"
he.__author__ # "Dr Merwan Roudane"
he.__email__ # "merwanroudane920@gmail.com"
he.__url__ # GitHub URL
Submodules
| Submodule | Public API |
|---|---|
| hybridecon.data | load_csv, load_yahoo, load_demo_returns, load_demo_panel, load_demo_macro, align_mixed_frequency |
| hybridecon.utils | set_seed, OUTDIR, log_returns, realized_volatility, garman_klass_volatility, build_sequences, train_val_test_split, standardize |
| hybridecon.diagnostics | TestResult, adf_test, kpss_test, phillips_perron_test, zivot_andrews_test, ljungbox_test, arch_lm_test, mcleod_li_test, jarque_bera_test, shapiro_wilk_test, anderson_darling_test, bds_test, cusum_break_test, pre_estimation_battery, diebold_mariano, giacomini_white, wilcoxon_loss_diff, mincer_zarnowitz, kupiec_pof, christoffersen_independence, post_estimation_battery |
| hybridecon.econometric | GARCH, GARCHMIDAS, HAR, ARIMAWrapper, ARFIMAWrapper, VARWrapper, DiagonalBEKK, GAS |
| hybridecon.deep | TrainConfig, Trainer, make_lstm, make_gru, make_bilstm, make_cnn_bilstm, make_cnn_bilstm_attention, make_attention_lstm, make_kan, pinball_loss_factory |
| hybridecon.regime | RegimeResult, HMMRegime, KMeansRegime |
| hybridecon.evaluate | rmse, mae, mape, smape, medae, qlike, evaluate_point_forecast, evaluate_volatility_forecast, model_confidence_set, annualised_return_compound, annualised_volatility, max_drawdown, sortino, information_ratio, adjusted_information_ratio, trading_metrics, var_backtest_table, expected_shortfall |
| hybridecon.viz | plot_series, plot_returns_and_volatility, plot_acf_pacf, plot_qq, residual_diagnostic_panel, plot_forecast, plot_volatility_decomposition, plot_regime_ribbon, plot_training_curve, plot_mcs_heatmap, plot_correlation_surface, table, diagnostic_table, metrics_table |
| hybridecon.hybrids | all 17 hybrid classes + HybridBase + run_benchmark |
TrainConfig
@dataclass
class TrainConfig:
epochs: int = 60
batch_size: int = 64
lr: float = 1e-3
weight_decay: float = 0.0
grad_clip: float = 1.0
patience: int = 10
factor: float = 0.5 # ReduceLROnPlateau factor
min_lr: float = 1e-6
verbose: bool = False
13. Troubleshooting
| Symptom | Likely cause / fix |
|---|---|
| ValueError: NaN or inf values found in y from arch | Pass a 1-D Series or array — not the full DataFrame from load_demo_panel (use load_demo_returns instead). |
| pmdarima install fails on Python 3.13 | pmdarima lags newer Python releases — pin to 3.12, or set order=(p,d,q) explicitly to skip auto-ARIMA. |
| hmmlearn install fails on Apple Silicon | Use the [deep] extra without hmmlearn and rely on KMeansRegime instead of HMMRegime. |
| Trainer runs on CPU even with CUDA installed | Set Trainer(net, device="cuda") explicitly, or check torch.cuda.is_available(). |
| Empty figures directory | The directory is created lazily; ensure you actually called a plotting helper or m.plot(). |
| MCS p-values look unstable for small T | Increase B (default 5000) and block (default 5) in model_confidence_set. |
| KPSS InterpolationWarning | Statistic outside the lookup table — the reported p-value is conservative; the warning itself is harmless. |
14. Citation
If you use hybridecon in academic work, please cite:
Roudane, M. (2026). hybridecon: A unified Python library for hybrid AI– econometric volatility, risk and macro forecasting. Version 0.2.0. https://github.com/merwanroudane/hybridecon
BibTeX:
@software{roudane2026hybridecon,
author = {Merwan Roudane},
title = {hybridecon: A unified Python library for hybrid AI--econometric
volatility, risk and macro forecasting},
year = {2026},
version = {0.2.0},
url = {https://github.com/merwanroudane/hybridecon}
}
15. License
MIT License — see LICENSE. Copyright © 2026 Dr Merwan Roudane.
For questions, bug reports or feature requests, please open an issue on GitHub or email merwanroudane920@gmail.com.