Hybrid econometrics × deep learning library: ARDL-MIDAS-DNN, DeepTVAR, DeepVARwT, and Neural Granger Causality, with full pre/post-tests, diagnostics, plots, and tables.
hybridmetrics
A unified Python library for hybrid econometrics × deep learning models.
ARDL · MIDAS · ARDL-MIDAS · ARDL-MIDAS-DNN · ARDL-MIDAS-Transformer · DeepTVAR · DeepVARwT · Neural Granger Causality with a complete pre/post-test battery, residual & stability diagnostics, publication-grade plots and tabulated reports.
|  |  |
|---|---|
| Author | Dr. Merwan Roudane |
| Email | merwanroudane920@gmail.com |
| GitHub | https://github.com/merwanroudane/hybridmetrics |
| Version | 0.1.0 |
| License | MIT |
Table of contents
- Why this library
- What it implements
- Installation
- Quick start
- Pre-estimation tests
- Post-estimation diagnostics
- ARDL
- MIDAS
- ARDL-MIDAS
- ARDL-MIDAS-DNN (hybrid)
- ARDL-MIDAS-Transformer (hybrid)
- DeepTVAR
- DeepVARwT
- Neural Granger Causality
- Plots
- Tables
- Diagnostics aggregators
- Synthetic data utilities
- End-to-end workflows
- Citation
- References
1. Why this library
Three recent strands of literature merge classical econometrics with deep learning to handle non-stationary, mixed-frequency, time-varying or weakly-identified time series:
- Mixed-frequency hybrid regression (ARDL-MIDAS + DNN) — Roudane et al. (2024).
- Time-varying VAR with neural networks (DeepTVAR & DeepVARwT) — Wang & Yuan (2023, 2024).
- Neural Granger causality (cMLP, cLSTM) — Tank et al. (2018).
Each comes with its own reference repository in a different language and scope. hybridmetrics packages all of them in a single, opinionated Python library with a consistent .fit() / .predict() / .summary() API and a complete econometric workflow on the side: stationarity, cointegration, bounds, Granger, residual diagnostics, recursive stability, and beautiful plots / tables.
2. What it implements
| Family | Class | Pre-tests | Post-tests | Plots | Forecast |
|---|---|---|---|---|---|
| Single-frequency | ARDL | ADF, KPSS, PP, ZA, EG, PO, PSS bounds, Granger | LB, BG, BP, ARCH-LM, JB, DW, RESET, CUSUM | residuals, ACF/PACF, IRF-style coef | ✓ |
| Mixed-frequency | MIDAS, ARDLMIDAS | as above | as above | weighting kernel, IC grid | ✓ |
| Hybrid | ARDLMIDASDNN | as above | as above + DNN diagnostics | weighting kernel, loss curve | ✓ |
| Hybrid (Transformer) | ARDLMIDASTransformer | as above | as above + attention rollout | weighting kernel, loss, attention map | ✓ |
| Multivariate | DeepTVAR, DeepVARwT | unit-root, Johansen | residual diagnostics | TV-coefficients, trend, training NLL | ✓ |
| Causal discovery | NeuralGrangerCausality | unit-root | n/a | causality heatmap, training loss | n/a |
All tests return typed dictionaries with statistic, p-value, critical values, lags, and a verbal conclusion, and there are tidy pd.DataFrame aggregators (run_unit_root_battery, residual_diagnostics).
3. Installation
3.1 Local install from this folder
```bash
cd /path/to/hybridmetrics
pip install -e .
```
3.2 With deep-learning dependencies
```bash
pip install -e ".[deep]"
```
The deep extra pulls PyTorch ≥ 1.13, which is required only by ARDLMIDASDNN, DeepTVAR, DeepVARwT, and NeuralGrangerCausality. The classical models work without it.
3.3 Optional packages
| Package | Used for |
|---|---|
| arch | Phillips-Perron and Phillips-Ouliaris cointegration tests |
| pytest | running the test suite |
```bash
pip install arch pytest
```
3.4 Sanity check
```python
import hybridmetrics as hm
hm.about()
```
prints the package banner, version, author, and recommended citation.
4. Quick start
```python
import hybridmetrics as hm
from hybridmetrics import data, plots, tables

# 1. Simulate a small ARDL DGP
df = data.make_ardl(T=300, beta=(0.5, 0.3, -0.2), alpha=(0.6,))

# 2. Pre-estimation: full unit-root battery on every column
print(tables.report_unit_root_battery(hm.run_unit_root_battery(df)))

# 3. PSS bounds test for cointegration
print(tables.report_bounds_test(hm.bounds_test(df["y"], df[["x"]])))

# 4. Fit ARDL(1,1) with constant
m = hm.ARDL(p=1, q=1, trend="c").fit(df["y"], df[["x"]])
print(tables.report_coefficients(m.coef_table()))

# 5. Post-estimation residual diagnostics
diag = hm.residual_diagnostics(m.resid, exog=df[["x"]].iloc[1:], model_results=m.results_)
print(tables.report_residual_diagnostics(diag))

# 6. Forecast and CUSUM stability plot
fig1 = plots.plot_forecast(df["y"], y_pred=m.forecast(steps=12).values)
fig2 = plots.plot_cusum(hm.cusum_cusumsq(m.resid))
fig1.savefig("forecast.png"); fig2.savefig("cusum.png")
```
The same data → pre-tests → fit → post-tests → plots/tables flow applies to every model in the library.
5. Pre-estimation tests
Module: hybridmetrics.tests_pre (re-exported at top level).
5.1 ADF — Augmented Dickey-Fuller
adf_test(y, regression="c", maxlag=None, autolag="AIC", alpha=0.05) -> dict
H0: unit root. regression ∈ {"n", "c", "ct", "ctt"}. Returns {stat, pvalue, lags, nobs, crit, conclusion}.
5.2 KPSS
kpss_test(y, regression="c", nlags="auto", alpha=0.05) -> dict
H0: stationarity (level if c, trend if ct). Returns the same field set.
5.3 Phillips-Perron
pp_test(y, regression="c", lags=None, alpha=0.05) -> dict
H0: unit root. Requires the arch package; otherwise returns a sentinel {"error": ...}.
5.4 Zivot-Andrews (one structural break)
zivot_andrews_test(y, regression="c", maxlag=None, alpha=0.05) -> dict
Additionally returns break_idx, the index of the most likely break point.
5.5 Engle-Granger cointegration
engle_granger_test(y, x, trend="c", method="aeg", maxlag=None, alpha=0.05) -> dict
5.6 Phillips-Ouliaris cointegration
phillips_ouliaris_test(y, x, alpha=0.05) -> dict
Single-equation cointegration test based on residual unit-root behaviour. Requires arch.
5.7 Johansen multivariate cointegration
johansen_test(data, det_order=0, k_ar_diff=1, alpha=0.05) -> dict
Returns trace and max-eigenvalue statistics, critical values, and the implied cointegration rank.
5.8 Pesaran-Shin-Smith F-bounds test
bounds_test(y, x, p=1, q=1, alpha=0.05) -> dict
Case III (unrestricted intercept, no trend) PSS bounds test. Returns the F statistic and the I(0)/I(1) critical bands at 1/5/10%, plus a decision (cointegration / no cointegration / inconclusive).
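The three-way decision rule can be sketched directly; `pss_decision` below is a hypothetical helper (not part of the library), and the critical values quoted are the 5% Case III bounds for one regressor from Pesaran, Shin & Smith (2001).

```python
def pss_decision(f_stat, lower, upper):
    """Three-way decision of the PSS F-bounds test at a given level.

    f_stat : computed F statistic
    lower  : I(0) critical value (all regressors stationary)
    upper  : I(1) critical value (all regressors integrated of order 1)
    """
    if f_stat > upper:
        return "cointegration"       # reject H0 of no level relationship
    if f_stat < lower:
        return "no cointegration"    # fail to reject H0
    return "inconclusive"            # F falls between the two bands

# 5% Case III bounds for k = 1 regressor (Pesaran, Shin & Smith, 2001)
print(pss_decision(7.2, lower=4.94, upper=5.73))  # cointegration
```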
5.9 Granger causality
granger_causality_test(data, target, source, maxlag=4, alpha=0.05) -> dict
Tabulates, for each lag up to maxlag, the F and χ² statistics, their p-values, and a 5% rejection flag.
5.10 Aggregator: full unit-root battery
run_unit_root_battery(data, alpha=0.05) -> pd.DataFrame
Applies ADF, PP and KPSS to every column of data. Pretty-print with tables.report_unit_root_battery.
6. Post-estimation diagnostics
Module: hybridmetrics.tests_post.
| Function | Test | H0 |
|---|---|---|
| ljung_box(resid, lags=10) | Ljung-Box Q | white noise |
| breusch_godfrey(model_results, lags=4) | BG LM | no autocorrelation |
| breusch_pagan(resid, exog) | BP LM | homoskedasticity |
| arch_lm(resid, lags=4) | Engle ARCH-LM | no ARCH |
| jarque_bera(resid) | JB | normality |
| durbin_watson(resid) | DW | no AR(1) |
| reset_test(model_results, power=3) | Ramsey RESET | correct functional form |
| cusum_cusumsq(resid) | recursive paths | parameter stability |
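As a sanity check on the battery above, the Durbin-Watson statistic is simple enough to compute by hand; this standalone NumPy sketch (not the library's `durbin_watson`) shows why a value near 2 signals no first-order autocorrelation.

```python
import numpy as np

def durbin_watson_stat(resid):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); approximately 2 under no AR(1)."""
    e = np.asarray(resid, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
e = rng.standard_normal(500)            # white-noise residuals
print(round(durbin_watson_stat(e), 2))  # close to 2
```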
Aggregator
residual_diagnostics(resid, exog=None, lags=10, model_results=None) -> pd.DataFrame
Runs the full battery in one call. Pass model_results= (a fitted statsmodels results object) to also get BG and RESET tests.
7. ARDL
class hybridmetrics.ARDL(p=1, q=1, trend="c")
Single-equation Autoregressive Distributed Lag model. Wraps statsmodels.tsa.ardl.ARDL with extras for bounds testing, long-run multipliers, and tidy coefficient tables.
Constructor
| Arg | Default | Notes |
|---|---|---|
| p | 1 | Lag order on y. |
| q | 1 (or tuple) | Lag order on each regressor. |
| trend | "c" | One of "n", "c", "ct", "ctt". |
Methods
```python
ARDL.select_order(y, x, maxp=4, maxq=4, trend="c", ic="aic") -> dict  # static method
ARDL.fit(y, x) -> self
ARDL.summary() -> str
ARDL.coef_table() -> pd.DataFrame  # coef, std_err, t, p, CI
ARDL.long_run_multipliers() -> pd.Series
ARDL.forecast(steps=10, exog_oos=None) -> np.ndarray
ARDL.bounds_test(alpha=0.05) -> dict
```
Attributes
| Name | Type | Description |
|---|---|---|
| results_ | statsmodels | underlying fitted result |
| resid | pd.Series | residuals |
| fittedvalues | pd.Series | in-sample fit |
Example
```python
sel = hm.ARDL.select_order(df["y"], df[["x"]], maxp=4, maxq=4, ic="aic")
m = hm.ARDL(p=sel["p"], q=sel["q"][0], trend="c").fit(df["y"], df[["x"]])
print(m.summary())
print(m.long_run_multipliers())
print(tables.report_bounds_test(m.bounds_test()))
```
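For intuition, the value reported by `long_run_multipliers()` follows the textbook ARDL formula θ = Σβ / (1 − Σα). The helper below is a hypothetical standalone illustration of that formula, not the library implementation.

```python
import numpy as np

def long_run_multiplier(alpha, beta):
    """Long-run multiplier of an ARDL(p, q):
    theta = sum(beta) / (1 - sum(alpha)), valid when sum(alpha) != 1."""
    return np.sum(beta) / (1.0 - np.sum(alpha))

# ARDL(1,1): y_t = 0.6 y_{t-1} + 0.5 x_t + 0.3 x_{t-1} + e_t
theta = long_run_multiplier(alpha=[0.6], beta=[0.5, 0.3])
print(round(theta, 4))  # 2.0
```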
8. MIDAS
class hybridmetrics.MIDAS(weighting="beta", poly_order=2, ar=1, include_constant=True)
Mixed-frequency single-equation regression. The high-frequency regressor matrix is shaped (T_low, m) where each row holds the m high-frequency observations of one low-frequency period.
Weighting schemes
| weighting | Description | Parameters |
|---|---|---|
| "unrestricted" | U-MIDAS, one coefficient per HF lag | m |
| "almon" | Exponential Almon polynomial | poly_order |
| "beta" | Two-parameter beta function | 2 |
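For intuition, both restricted kernels can be computed directly in NumPy. The parameterisations below follow the standard MIDAS literature (Ghysels et al.) and may differ in detail from the library's internal forms.

```python
import numpy as np

def beta_weights(m, theta1, theta2):
    """Two-parameter beta-density MIDAS kernel over m high-frequency lags."""
    k = (np.arange(1, m + 1) - 0.5) / m           # lags mapped into (0, 1)
    w = k ** (theta1 - 1) * (1 - k) ** (theta2 - 1)
    return w / w.sum()                            # weights sum to one

def exp_almon_weights(m, theta):
    """Exponential Almon kernel; theta holds the polynomial coefficients."""
    k = np.arange(1, m + 1)
    poly = sum(t * k ** (i + 1) for i, t in enumerate(theta))
    w = np.exp(poly)
    return w / w.sum()

w = beta_weights(12, theta1=1.0, theta2=5.0)      # fast-decaying kernel
print(w.round(3))
```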
Methods
```python
MIDAS.fit(y, X_high) -> self          # X_high shape (T_low, m)
MIDAS.fittedvalues -> pd.Series
MIDAS.resid -> pd.Series
MIDAS.weights_ -> np.ndarray | None   # estimated MIDAS kernel
MIDAS.info_criteria() -> dict         # AIC / BIC / HQIC / loglik
MIDAS.summary() -> str
```
Example
```python
y_low, x_high = data.make_mixed_frequency(T_low=120, m=4)
m = hm.MIDAS(weighting="beta", ar=1).fit(y_low, x_high)
print(m.summary())
plots.plot_midas_weights(m.weights_).savefig("midas_kernel.png")
```
9. ARDL-MIDAS
class hybridmetrics.ARDLMIDAS(p=1, weighting="beta", poly_order=2, include_constant=True)
ARDL skeleton on the low-frequency target plus one or more MIDAS-weighted high-frequency blocks. Pass high_blocks as a list of (T_low, m_k) arrays.
```python
m = hm.ARDLMIDAS(p=2, weighting="beta").fit(y_low, [x_high_1, x_high_2])
print(m.summary())
print(m.info_criteria())
```
10. ARDL-MIDAS-DNN (hybrid)
```python
class hybridmetrics.ARDLMIDASDNN(
    p=1, weighting="beta", poly_order=2,
    hidden=(32, 16), dropout=0.1, lr=1e-3,
    epochs=400, batch_size=32, patience=30, weight_decay=1e-4,
    device="cpu", seed=0,
)
```
Two-stage hybrid à la Zhang (2003) for mixed-frequency data:
- Linear stage — ARDLMIDAS captures cointegration / long-run dynamics.
- Nonlinear stage — a small dense network (hidden) consumes the high-frequency block + AR lags and predicts the residuals of the linear stage.
- The final forecast is the sum of the two predictions.
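The two-stage logic can be sketched without PyTorch; here a cubic polynomial stands in for the DNN residual learner, so this is an illustration of the Zhang-style decomposition, not the library's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
x = rng.standard_normal(T)
y = 1.0 + 0.8 * x + 0.5 * np.sin(3 * x) + 0.1 * rng.standard_normal(T)

# Stage 1: linear OLS captures the linear / long-run part
X = np.column_stack([np.ones(T), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid_lin = y - X @ b

# Stage 2: a nonlinear learner fits the stage-1 residuals
# (a cubic polynomial stands in for the DNN here)
P = np.vander(x, 4)
c = np.linalg.lstsq(P, resid_lin, rcond=None)[0]

# Final prediction = linear part + predicted residual
y_hat = X @ b + P @ c
mse_linear = np.mean(resid_lin ** 2)
mse_hybrid = np.mean((y - y_hat) ** 2)
print(mse_hybrid < mse_linear)  # True: stage 2 can only reduce in-sample error
```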
Methods
```python
ARDLMIDASDNN.fit(y, high_blocks) -> self
ARDLMIDASDNN.predict(high_blocks_new, y_history) -> np.ndarray
ARDLMIDASDNN.fittedvalues -> pd.Series
ARDLMIDASDNN.resid -> pd.Series             # final hybrid residuals
ARDLMIDASDNN.linear_residuals -> pd.Series  # residuals of the linear stage
ARDLMIDASDNN.info_criteria() -> dict
ARDLMIDASDNN.summary() -> str
```
Example
```python
y, X_high_block = data.make_mixed_frequency(T_low=200, m=3)
hyb = hm.ARDLMIDASDNN(p=1, weighting="beta", hidden=(32, 16), epochs=300)
hyb.fit(y, [X_high_block.values])
print(hyb.summary())
print(tables.report_info_criteria(hyb.info_criteria()))
plots.plot_residual_diagnostics(hyb.resid).savefig("hybrid_resid.png")
```
11. ARDL-MIDAS-Transformer (hybrid)
```python
class hybridmetrics.ARDLMIDASTransformer(
    p=1, weighting="beta", poly_order=2,
    d_model=32, nhead=4, num_layers=2, dim_feedforward=64,
    dropout=0.1, lr=5e-4,
    epochs=400, batch_size=32, patience=30, weight_decay=1e-4,
    device="cpu", seed=0,
)
```
Python re-implementation of Chalkiadakis, Peters & Ames (2023) — Hybrid ARDL-MIDAS-Transformer Time-Series Regressions for Multi-Topic Crypto Market Sentiment Driven by Price and Technology Factors (Digital Finance 5, 295-365).
The architecture mirrors the DNN variant but replaces the dense residual network with a multi-head Transformer encoder:
- Linear stage — ARDLMIDAS for cointegration / long-run dynamics.
- Transformer stage — every high-frequency lag and every AR lag is embedded as a token of a sequence; sinusoidal positional encodings are added; a stack of num_layers TransformerEncoderLayers with nhead attention heads encodes the sequence; a regression head predicts the linear-stage residuals.
- The two predictions are summed.
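The attention-rollout diagnostic mentioned below follows the recipe of Abnar & Zuidema (2020): each layer's attention map is averaged with the identity (to account for residual connections), row-normalised, and multiplied through the layers. The NumPy sketch illustrates the computation; the library's implementation may differ in detail.

```python
import numpy as np

def attention_rollout(attn):
    """Attention rollout over a stack of encoder layers.

    attn : array of shape (num_layers, seq, seq); each row sums to one.
    Returns the accumulated token-to-token influence matrix.
    """
    num_layers, seq, _ = attn.shape
    rollout = np.eye(seq)
    for a in attn:
        a = 0.5 * a + 0.5 * np.eye(seq)        # model the skip connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalise rows
        rollout = a @ rollout                  # compose layer by layer
    return rollout

rng = np.random.default_rng(0)
raw = rng.random((2, 5, 5))
attn = raw / raw.sum(axis=-1, keepdims=True)   # row-stochastic attention maps
R = attention_rollout(attn)
print(np.allclose(R.sum(axis=-1), 1.0))        # rollout rows remain stochastic
```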
Constructor arguments
| Arg | Default | Description |
|---|---|---|
| p | 1 | AR order on the low-frequency target. |
| weighting | "beta" | MIDAS scheme: "beta" / "almon" / "unrestricted". |
| poly_order | 2 | Order of the Almon polynomial when weighting="almon". |
| d_model | 32 | Token embedding dimension. |
| nhead | 4 | Number of attention heads (must divide d_model). |
| num_layers | 2 | Number of stacked encoder blocks. |
| dim_feedforward | 64 | Hidden width of the position-wise FFN. |
| dropout | 0.1 | Dropout rate for attention + FFN + head. |
| lr | 5e-4 | AdamW learning rate. |
| epochs | 400 | Maximum number of training epochs. |
| batch_size | 32 | Mini-batch size. |
| patience | 30 | Early-stopping patience on the held-out fold. |
| weight_decay | 1e-4 | AdamW weight decay. |
| device | "cpu" | "cpu" or "cuda". |
| seed | 0 | Reproducibility seed. |
Methods
```python
ARDLMIDASTransformer.fit(y, high_blocks) -> self
ARDLMIDASTransformer.predict(high_blocks_new, y_history) -> np.ndarray
ARDLMIDASTransformer.fittedvalues -> pd.Series
ARDLMIDASTransformer.resid -> pd.Series             # final hybrid residuals
ARDLMIDASTransformer.linear_residuals -> pd.Series  # linear-stage residuals
ARDLMIDASTransformer.attention_rollout(X) -> np.ndarray  # (num_layers, seq, seq)
ARDLMIDASTransformer.loss_history() -> pd.Series
ARDLMIDASTransformer.info_criteria() -> dict
ARDLMIDASTransformer.summary() -> str
```
Example
```python
import numpy as np
from hybridmetrics import ARDLMIDASTransformer, data, plots, tables

y, X = data.make_mixed_frequency(T_low=200, m=4)
m = ARDLMIDASTransformer(
    p=1, weighting="beta",
    d_model=32, nhead=4, num_layers=2,
    epochs=200, patience=30,
).fit(y, [X.values])
print(m.summary())

# Visualise self-attention from the final encoder layer
idx = np.where(m._valid)[0]
ar = np.column_stack([y.values[idx - k] for k in range(1, m.p + 1)])
Xfeat = np.hstack([X.values[m._valid], ar])
attn = m.attention_rollout(Xfeat)
plots.plot_attention(attn, layer=-1).savefig("attention.png")
```
The attention heatmap shows which (high-frequency) lag tokens drive the residual prediction — a direct interpretability tool that is unique to the Transformer variant.
12. DeepTVAR
```python
class hybridmetrics.DeepTVAR(
    p=2, hidden=32, num_layers=1,
    lr=5e-3, epochs=400, weight_decay=1e-4,
    device="cpu", seed=0,
)
```
Time-varying VAR(p) where an LSTM emits, at every step t, the vectorised coefficient matrices A_1(t), ..., A_p(t) and the Cholesky factor of the innovation covariance Σ(t). Trained end-to-end by minimising the Gaussian negative log-likelihood.
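The training objective can be written down directly: given innovations ε_t and the Cholesky factor L_t of Σ_t = L_t L_t', the per-step Gaussian NLL is 0.5·(k·log 2π + log|Σ_t| + ε_t' Σ_t⁻¹ ε_t). The NumPy sketch below is an assumed standalone illustration, not the library's loss code.

```python
import numpy as np

def gaussian_nll(eps, chol):
    """Average Gaussian negative log-likelihood of innovations eps_t
    given time-varying lower-triangular Cholesky factors of Sigma_t.

    eps  : (T, k) innovations
    chol : (T, k, k) Cholesky factors L_t with Sigma_t = L_t @ L_t.T
    """
    T, k = eps.shape
    nll = 0.0
    for t in range(T):
        L = chol[t]
        z = np.linalg.solve(L, eps[t])             # whitened innovation
        logdet = 2.0 * np.sum(np.log(np.diag(L)))  # log|Sigma_t|
        nll += 0.5 * (k * np.log(2 * np.pi) + logdet + z @ z)
    return nll / T

rng = np.random.default_rng(0)
T, k = 200, 3
chol = np.tile(np.eye(k), (T, 1, 1))               # Sigma_t = I for all t
eps = rng.standard_normal((T, k))
print(round(gaussian_nll(eps, chol), 3))
```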
Methods
```python
DeepTVAR.fit(data, verbose=False) -> self
DeepTVAR.fitted() -> pd.DataFrame
DeepTVAR.resid() -> pd.DataFrame
DeepTVAR.forecast(h=10) -> pd.DataFrame
DeepTVAR.loss_history() -> pd.Series
```
Attributes
| Name | Shape | Description |
|---|---|---|
| coefs_ / time_varying_coefficients | (T, p, k, k) | TV-VAR coefficients |
| chol_ | (T, k, k) | TV Cholesky factors |
| time_varying_sigma | (T, k, k) | TV innovation covariance |
Example
```python
df = data.make_var(T=400, k=3, p=2)
m = hm.DeepTVAR(p=2, hidden=32, epochs=300).fit(df)
plots.plot_tv_coefficients(m.coefs_, var_names=df.columns, lag=0).savefig("tvar_lag1.png")
plots.plot_loss_history(m.loss_history()).savefig("tvar_loss.png")
print(m.forecast(h=10))
```
13. DeepVARwT
```python
class hybridmetrics.DeepVARwT(
    p=2, hidden=32, num_layers=1,
    lr=5e-3, epochs=600, weight_decay=1e-4,
    device="cpu", seed=0,
)
```
Deep VAR with deterministic trend:
y_t = μ_t + Σ_{i=1}^p A_i (y_{t-i} - μ_{t-i}) + ε_t, ε_t ~ N(0, Σ).
The trend μ_t is generated by an LSTM driven by a normalised time index. The VAR coefficients are constant across time but constrained to be causal via Whittle's algorithm on partial-correlation matrices (the same trick as the original repository's check_causality.py).
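A standard post-hoc way to verify the causality/stability that Whittle's algorithm enforces is to check that all eigenvalues of the VAR companion matrix lie strictly inside the unit circle. The sketch below uses this classical check, not Whittle's algorithm itself.

```python
import numpy as np

def is_stable_var(A):
    """Check that a VAR(p) with coefficient matrices A (shape (p, k, k))
    is causal/stable: all companion-matrix eigenvalues are inside the
    unit circle."""
    p, k, _ = A.shape
    companion = np.zeros((k * p, k * p))
    companion[:k, :] = np.concatenate(A, axis=1)  # top block row [A_1 ... A_p]
    companion[k:, :-k] = np.eye(k * (p - 1))      # identity sub-diagonal
    return bool(np.max(np.abs(np.linalg.eigvals(companion))) < 1.0)

A = np.array([[[0.5, 0.1], [0.0, 0.4]],   # A_1
              [[0.2, 0.0], [0.1, 0.1]]])  # A_2
print(is_stable_var(A))  # True
```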
Methods
```python
DeepVARwT.fit(data, verbose=False) -> self
DeepVARwT.fitted() / resid() -> pd.DataFrame
DeepVARwT.forecast(h=10) -> pd.DataFrame  # extrapolates the trend
DeepVARwT.loss_history() -> pd.Series
```
Attributes
| Name | Shape | Description |
|---|---|---|
| A_ | (p, k, k) | causal VAR coefficient matrices |
| trend_ | (T, k) | fitted deterministic trend |
| Sigma_ / chol_ | (k, k) | innovation covariance / its Cholesky |
14. Neural Granger Causality
```python
class hybridmetrics.NeuralGrangerCausality(
    method="cmlp",  # or "clstm"
    lag=5, hidden=32, num_layers=1,
    lam=0.1, lam_ridge=1e-4,
    lr=1e-3, epochs=800,
    device="cpu", seed=0,
)
```
Component-wise neural network for Granger-causal discovery (Tank et al., 2018). For each target series y_i an independent network maps the lagged inputs to y_i. A group-lasso proximal step on the input-to-hidden weights of every column drives entire columns to zero — surviving columns mark Granger-causal links.
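The group-lasso proximal step has a closed form: each input column of the first-layer weight matrix is soft-thresholded as a block, and columns whose norm falls below the threshold are zeroed outright. The sketch below applies it to a hypothetical weight matrix and only illustrates the mechanism, not the library's optimizer.

```python
import numpy as np

def group_lasso_prox(W, lam, lr):
    """Proximal step for a column-wise group lasso on an input-weight
    matrix W (hidden x inputs): shrink each input column towards zero,
    zeroing it exactly when its norm is below lr * lam."""
    W = W.copy()
    norms = np.linalg.norm(W, axis=0)
    for j, n in enumerate(norms):
        if n <= lr * lam:
            W[:, j] = 0.0                    # input j pruned entirely
        else:
            W[:, j] *= 1.0 - lr * lam / n    # block soft-thresholding
    return W

W = np.array([[0.001, 0.9], [0.002, -1.2]])
Wp = group_lasso_prox(W, lam=0.1, lr=0.1)
print(np.linalg.norm(Wp, axis=0) > 0)  # column 0 pruned, column 1 kept
```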
Methods
```python
NeuralGrangerCausality.fit(data, verbose=False) -> self
NeuralGrangerCausality.causality_matrix(threshold=0.0) -> pd.DataFrame  # k x k
NeuralGrangerCausality.causality_pairs(threshold=1e-3) -> pd.DataFrame  # tidy long form
```
Example
```python
df = data.make_var(T=500, k=4, p=2)
gc = hm.NeuralGrangerCausality(method="cmlp", lag=5, lam=0.05, epochs=400).fit(df)
M = gc.causality_matrix()
print(M)
plots.plot_granger_heatmap(M).savefig("gc_heatmap.png")
print(tables.report_granger_pairs(gc.causality_pairs(threshold=0.05)))
```
15. Plots
Module hybridmetrics.plots. Every function returns a matplotlib.figure.Figure and prints a small footer with the package version and author info.
| Function | Purpose |
|---|---|
| plot_series(data, title) | Multivariate line plot. |
| plot_acf_pacf(series, lags=40) | ACF + PACF panels. |
| plot_residual_diagnostics(resid) | 4-panel: time path, ACF, histogram, Q-Q. |
| plot_cusum(cusum_dict) | CUSUM + CUSUMSQ with 5% bands. |
| plot_forecast(y_train, y_test, y_pred, sigma) | Forecast fan chart. |
| plot_midas_weights(weights_list, labels) | MIDAS weighting kernel. |
| plot_tv_coefficients(coefs, var_names, lag) | DeepTVAR small-multiples grid. |
| plot_granger_heatmap(matrix) | Neural-GC causality heatmap. |
| plot_attention(attention, layer) | Transformer self-attention heatmap. |
| plot_loss_history(history) | Training NLL curve. |
| plot_ic_grid(ic_table, value_col) | AIC/BIC heatmap over (p, q). |
| plot_actual_vs_fitted(y_true, y_pred) | 45° diagnostic. |
```python
hm.plots.set_style()  # apply the paper-quality theme
fig = hm.plots.plot_residual_diagnostics(model.resid)
fig.savefig("residuals.pdf", bbox_inches="tight")
```
16. Tables
Module hybridmetrics.tables. Each helper wraps tabulate with a header block carrying the package version and author.
```python
tables.fmt_table(df, title, floatfmt=".4f", tablefmt="github") -> str
tables.report_unit_root_battery(df) -> str
tables.report_residual_diagnostics(df) -> str
tables.report_coefficients(df, title="Coefficient table") -> str
tables.report_info_criteria(ic, title="Model fit") -> str
tables.report_forecast_metrics(metrics, title="Forecast accuracy") -> str
tables.report_bounds_test(d) -> str
tables.report_granger_pairs(df) -> str
```
tablefmt accepts every tabulate format — "github", "latex", "html", "plain", etc.
17. Diagnostics aggregators
Module hybridmetrics.diagnostics.
```python
diagnostics.pre_battery(data) -> pd.DataFrame
diagnostics.post_battery(resid, exog=None, lags=10, model_results=None) -> pd.DataFrame
diagnostics.stability_battery(resid) -> dict
diagnostics.fit_summary(model, y_true=None, y_pred=None) -> dict
```
fit_summary collects info-criteria, forecast metrics, and the model summary in a single dict suitable for serialisation.
18. Synthetic data utilities
Module hybridmetrics.data.
```python
data.make_var(T=300, k=3, p=2, rho=0.5, sigma=0.5, seed=0) -> pd.DataFrame
data.make_ardl(T=250, beta=(0.4, 0.3, -0.2), alpha=(0.6,), sigma=0.4, seed=1) -> pd.DataFrame
data.make_mixed_frequency(T_low=80, m=3, seed=2) -> (pd.Series, pd.DataFrame)
data.stack_high_freq(x_high, m) -> np.ndarray  # reshape (T_low*m,) -> (T_low, m)
```
Use these to reproduce all examples and tests without external data.
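The (T_low, m) convention used by the mixed-frequency models amounts to a simple reshape. The NumPy sketch below assumes high-frequency observations are stored oldest-first within each low-frequency period, which may differ from the library's exact ordering.

```python
import numpy as np

# A monthly series covering 10 quarters (m = 3 months per quarter):
x_high = np.arange(30, dtype=float)  # shape (T_low * m,)

# Reshape so each row holds the m HF observations of one LF period,
# mirroring what stack_high_freq is documented to do:
X = x_high.reshape(-1, 3)            # shape (T_low, m) = (10, 3)
print(X[0], X[-1])                   # [0. 1. 2.] [27. 28. 29.]
```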
19. End-to-end workflows
19.1 ARDL-MIDAS-DNN (hybrid mixed-frequency forecasting)
```python
import hybridmetrics as hm
from hybridmetrics import data, plots, tables

y, X = data.make_mixed_frequency(T_low=200, m=3)
print(tables.report_unit_root_battery(hm.run_unit_root_battery(y.to_frame())))
hyb = hm.ARDLMIDASDNN(p=1, weighting="beta", hidden=(32, 16), epochs=300)
hyb.fit(y, [X.values])
print(hyb.summary())
print(tables.report_info_criteria(hyb.info_criteria()))
print(tables.report_residual_diagnostics(hm.residual_diagnostics(hyb.resid)))
plots.plot_actual_vs_fitted(y[hyb.fittedvalues.index], hyb.fittedvalues).savefig("avf.png")
plots.plot_residual_diagnostics(hyb.resid).savefig("resid.png")
plots.plot_cusum(hm.cusum_cusumsq(hyb.resid)).savefig("cusum.png")
```
19.2 DeepTVAR (LSTM time-varying VAR)
```python
df = data.make_var(T=400, k=3, p=2)
m = hm.DeepTVAR(p=2, hidden=32, epochs=300).fit(df)
plots.plot_tv_coefficients(m.coefs_, var_names=df.columns, lag=0).savefig("A1.png")
plots.plot_loss_history(m.loss_history()).savefig("loss.png")
print(m.forecast(h=12))
```
19.3 Neural Granger causality
```python
df = data.make_var(T=500, k=4, p=2)
gc = hm.NeuralGrangerCausality(method="cmlp", lag=5, lam=0.05, epochs=400).fit(df)
plots.plot_granger_heatmap(gc.causality_matrix()).savefig("gc.png")
print(tables.report_granger_pairs(gc.causality_pairs(threshold=0.05)))
```
20. Citation
Roudane, M. (2026). hybridmetrics: A unified Python library for hybrid econometrics
and deep learning models (ARDL-MIDAS-DNN, DeepTVAR, DeepVARwT, Neural Granger
Causality). Version 0.1.0. https://github.com/merwanroudane/hybridmetrics
BibTeX:
```bibtex
@software{roudane2026hybridmetrics,
  author  = {Roudane, Merwan},
  title   = {hybridmetrics: A unified Python library for hybrid econometrics and deep learning},
  year    = {2026},
  version = {0.1.0},
  url     = {https://github.com/merwanroudane/hybridmetrics}
}
```
21. References
- Pesaran, M. H., Shin, Y. & Smith, R. J. (2001). Bounds Testing Approaches to the Analysis of Level Relationships. Journal of Applied Econometrics, 16(3), 289–326.
- Engle, R. F. & Granger, C. W. J. (1987). Co-integration and Error Correction. Econometrica, 55(2), 251–276.
- Phillips, P. C. B. & Ouliaris, S. (1990). Asymptotic Properties of Residual Based Tests for Cointegration. Econometrica, 58(1), 165–193.
- Ghysels, E., Santa-Clara, P. & Valkanov, R. (2004). The MIDAS Touch: Mixed Data Sampling Regression Models. Working paper.
- Zhang, G. P. (2003). Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing, 50, 159–175.
- Wang, X. & Yuan, Y. (2023). DeepTVAR: Deep learning for a time-varying VAR model with extension to integrated VAR. International Journal of Forecasting.
- Wang, X., Yuan, Y. & Liu, S. (2024). DeepVARwT: Deep Learning for a VAR Model with Trend. Energies / preprint.
- Tank, A., Covert, I., Foti, N., Shojaie, A. & Fox, E. B. (2018). Neural Granger Causality. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Chalkiadakis, I., Peters, G. W. & Ames, M. (2023). Hybrid ARDL-MIDAS-Transformer time-series regressions for multi-topic crypto market sentiment driven by price and technology factors. Digital Finance, 5, 295–365. https://doi.org/10.1007/s42521-023-00079-9
- Roudane, M. et al. (2024). ARDL-MIDAS-DNN: a hybrid forecasting framework for cryptocurrency markets. Working paper / preprint.
Built and maintained by Dr. Merwan Roudane — feel free to open issues or pull requests on https://github.com/merwanroudane/hybridmetrics.