Aponyx
Early-stage research framework — Not for production use
A modular Python framework for developing and backtesting systematic credit strategies.
It provides a type-safe, reproducible research environment for tactical fixed-income strategies, with clean separation between strategy logic, data infrastructure, and backtesting workflows.
Key Features
- Type-safe data loading with schema validation (Parquet, CSV, Bloomberg Terminal)
- Modular signal framework with composable transformations and registry management
- Deterministic backtesting with transaction cost modeling and comprehensive metrics
- Interactive visualization with Plotly charts (equity curves, signals, drawdown)
- File-based persistence with metadata tracking and versioning
- Strategy governance with centralized registry and configuration management
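The schema-validation idea in the first bullet can be sketched in a few lines. This is an illustrative stand-in, not aponyx's actual loader; `REQUIRED` and `load_validated_csv` are made-up names for the sketch:

```python
import csv
import io

# Minimal sketch of validate-on-load; aponyx's real validators are richer
# (Parquet, Bloomberg, dtype coercion, etc.). Column names are assumptions.
REQUIRED = {"date", "spread"}

def load_validated_csv(text: str) -> list[dict]:
    """Parse CSV text and fail fast if required columns are missing."""
    rows = list(csv.DictReader(io.StringIO(text)))
    if not rows or not REQUIRED.issubset(rows[0]):
        missing = REQUIRED - set(rows[0] if rows else [])
        raise ValueError(f"missing required columns: {sorted(missing)}")
    return rows

data = "date,spread\n2024-01-02,101.5\n2024-01-03,99.8\n"
print(load_validated_csv(data)[0]["spread"])  # 101.5
```

Failing fast at the data boundary keeps bad columns from propagating silently into signal computation.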
Installation
From PyPI (Recommended)
pip install aponyx
Optional dependencies:
# Visualization (Plotly, Streamlit)
pip install aponyx[viz]
# Bloomberg Terminal support (requires manual blpapi install)
pip install aponyx[bloomberg]
# Development tools
pip install aponyx[dev]
From Source
Requires Python 3.12 and uv:
git clone https://github.com/stabilefrisur/aponyx.git
cd aponyx
uv sync # Install dependencies
uv sync --extra viz # Include visualization
Bloomberg Terminal Setup (Optional)
Note: Bloomberg data loading requires an active Terminal session and a manual blpapi installation.
- Install blpapi by following the instructions here: Bloomberg API Library
- Install the Bloomberg extra:
pip install aponyx[bloomberg]
File-based data loading (FileSource) works without Bloomberg dependencies.
Quick Start
1. Generate Synthetic Data (Development)
uv run python -m aponyx.notebooks.generate_synthetic_data
This creates test data in data/raw/synthetic/.
2. Or Download Bloomberg Data (Production)
Run the 01_data_download.ipynb notebook to fetch from Bloomberg Terminal.
Data is saved to data/raw/bloomberg/.
3. Run Analysis
from aponyx.data import fetch_cdx, fetch_etf, FileSource
from aponyx.models import compute_cdx_etf_basis, SignalConfig
from aponyx.backtest import run_backtest, BacktestConfig
from aponyx.evaluation.performance import compute_all_metrics
# Load validated market data (automatic caching from raw/)
cdx_df = fetch_cdx(FileSource("data/raw/synthetic/cdx_ig_5y.parquet"), security="cdx_ig_5y")
etf_df = fetch_etf(FileSource("data/raw/synthetic/hyg.parquet"), security="hyg")
# Generate signal with configuration
signal_config = SignalConfig(lookback=20, min_periods=10)
signal = compute_cdx_etf_basis(cdx_df, etf_df, signal_config)
# Evaluate signal-product suitability (optional pre-backtest assessment)
from aponyx.evaluation.suitability import evaluate_signal_suitability, SuitabilityConfig
suitability_config = SuitabilityConfig(rolling_window=252) # ~1 year daily data
result = evaluate_signal_suitability(signal, cdx_df["spread"], suitability_config)
print(f"Suitability score: {result.composite_score:.2f}")
print(f"Stability: {result.sign_consistency_ratio:.1%} sign consistency, CV={result.beta_cv:.3f}")
# Run backtest with transaction costs
backtest_config = BacktestConfig(
entry_threshold=1.5,
exit_threshold=0.75,
transaction_cost_bps=1.0
)
results = run_backtest(signal, cdx_df["spread"], backtest_config)
# Compute comprehensive performance metrics
metrics = compute_all_metrics(results.pnl, results.positions)
# Analyze results
print(f"Sharpe Ratio: {metrics.sharpe_ratio:.2f}")
print(f"Max Drawdown: ${metrics.max_drawdown:,.0f}")
print(f"Hit Rate: {metrics.hit_rate:.2%}")
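The entry/exit thresholds in `BacktestConfig` above suggest a simple crossing rule: open a position when the signal magnitude exceeds the entry threshold and flatten once it mean-reverts below the exit threshold. The sketch below is a hypothetical simplification for intuition, not aponyx's actual `run_backtest` logic; `threshold_positions` is a made-up name:

```python
# Illustrative threshold rule only; NOT aponyx's run_backtest implementation.
def threshold_positions(signal: list[float], entry: float = 1.5,
                        exit: float = 0.75) -> list[int]:
    """Enter when |signal| crosses entry; flatten when it falls to exit."""
    positions = []
    pos = 0
    for s in signal:
        if pos == 0 and abs(s) >= entry:
            pos = 1 if s > 0 else -1  # positive signal -> long credit risk
        elif pos != 0 and abs(s) <= exit:
            pos = 0                   # signal has mean-reverted; flatten
        positions.append(pos)
    return positions

signal = [0.2, 1.6, 1.2, 0.9, 0.5, -1.8, -1.0, -0.6]
print(threshold_positions(signal))  # [0, 1, 1, 1, 0, -1, -1, 0]
```

Using a lower exit than entry threshold adds hysteresis, which reduces churn (and therefore transaction costs) when the signal hovers near the entry level.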
Bloomberg Terminal alternative:
from aponyx.data import BloombergSource
source = BloombergSource()
cdx_df = fetch_cdx(source, security="cdx_ig_5y")
Architecture
Aponyx follows a layered architecture with clean separation of concerns:
| Layer | Purpose | Key Modules |
|---|---|---|
| Data | Load, validate, transform market data | fetch_cdx, fetch_vix, fetch_etf, apply_transform, FileSource, BloombergSource |
| Models | Generate signals for independent evaluation | compute_cdx_etf_basis, compute_cdx_vix_gap, SignalRegistry |
| Evaluation | Pre-backtest screening (rolling window stability) and post-backtest analysis | evaluate_signal_suitability, analyze_backtest_performance, PerformanceRegistry |
| Backtest | Simulate execution and generate P&L | run_backtest, BacktestConfig, StrategyRegistry |
| Visualization | Interactive charts and dashboards | plot_equity_curve, plot_signal, plot_drawdown |
| Persistence | Save/load data with metadata registry | save_parquet, load_parquet, DataRegistry |
Data Storage
data/
raw/ # Original source data (permanent)
bloomberg/ # Bloomberg Terminal downloads
synthetic/ # Synthetic test data
cache/ # Temporary performance cache (regenerable)
processed/ # Computed signals and features (regenerable)
registry.json # Dataset tracking catalog
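As an illustration of what a metadata catalog like registry.json might track, here is a hypothetical entry written and read back with the standard library. Every field name here is an assumption; the real schema is defined by aponyx's DataRegistry and may differ:

```python
import hashlib
import json
import pathlib
import tempfile

# Hypothetical registry entry; field names are assumptions, not aponyx's schema.
entry = {
    "name": "cdx_ig_5y",
    "path": "raw/synthetic/cdx_ig_5y.parquet",
    "sha256": hashlib.sha256(b"example bytes").hexdigest(),  # content hash
    "created": "2025-11-16",
}

# Round-trip the catalog through a temporary registry.json.
registry_path = pathlib.Path(tempfile.mkdtemp()) / "registry.json"
registry_path.write_text(json.dumps({"datasets": [entry]}, indent=2))
print(json.loads(registry_path.read_text())["datasets"][0]["name"])  # cdx_ig_5y
```

Content hashes like the one above are what make hash-based naming and cache invalidation possible: if the raw bytes change, the hash (and thus the derived artifacts) change with them.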
Research Workflow
Raw Data (Parquet/CSV/Bloomberg)
↓
Data Layer (load, validate, transform)
↓
Models Layer (signal computation)
↓
Evaluation Layer (signal-product suitability)
↓
Backtest Layer (execution simulation)
↓
Evaluation Layer (performance metrics & analysis)
↓
Visualization Layer (charts)
↓
Persistence Layer (results)
Research Notebooks
End-to-end research workflow notebooks are included in the package, covering data acquisition through performance analysis.
Access installed notebooks:
# Locate notebook directory
from pathlib import Path
import aponyx
notebooks_dir = Path(aponyx.__file__).parent / "notebooks"
print(notebooks_dir)
Workflow notebooks:
| Notebook | Description |
|---|---|
| 01_data_download.ipynb | Download market data from Bloomberg Terminal |
| 02_signal_computation.ipynb | Generate signals using SignalRegistry |
| 03_suitability_evaluation.ipynb | Pre-backtest signal screening and evaluation |
| 04_backtest_execution.ipynb | Execute backtests and save raw results |
| 05_performance_analysis.ipynb | Comprehensive post-backtest performance analysis |
| 06_single_signal_template.ipynb | End-to-end single-signal research template |
Note: Notebook 01 requires Bloomberg Terminal. Notebooks 02-06 work with any data source (file-based or Bloomberg).
Usage:
# Copy notebooks to your workspace
pip install aponyx[viz] # Install with notebook dependencies
python -c "from pathlib import Path; import aponyx, shutil; src = Path(aponyx.__file__).parent / 'notebooks'; shutil.copytree(src, 'notebooks')"
jupyter notebook notebooks/
Notebooks demonstrate the complete systematic research workflow from data acquisition through performance analysis.
Documentation
Documentation is included with the package and available after installation:
# Access docs programmatically
from aponyx.docs import get_docs_dir
docs_path = get_docs_dir()
print(docs_path) # Path to installed documentation
Available documentation:
Strategy & Research:
- cdx_overlay_strategy.md - Investment thesis and pilot implementation
- signal_registry_usage.md - Signal management workflow
- signal_suitability_design.md - Pre-backtest evaluation framework
- performance_evaluation_design.md - Post-backtest analysis framework
Architecture & Design:
- governance_design.md - Registry, catalog, and config patterns
- raw_data_storage_design.md - Raw data storage conventions and hash-based naming
- caching_design.md - Cache layer architecture
- visualization_design.md - Chart architecture and patterns
- logging_design.md - Logging conventions and metadata
Development:
- python_guidelines.md - Code standards and best practices
- adding_data_providers.md - Provider extension guide
- documentation_structure.md - Documentation organization principles
All documentation is included in the package and also available on GitHub.
What's Included
Three pilot signals for CDX overlay strategies:
- CDX-ETF Basis - Flow-driven mispricing from cash-derivative basis
- CDX-VIX Gap - Cross-asset risk sentiment divergence
- Spread Momentum - Short-term continuation in credit spreads
Core capabilities: Type-safe data loading • Signal registry • Pre/post-backtest evaluation • Deterministic backtesting • Interactive visualizations • Comprehensive testing (>90% coverage)
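For intuition on the headline metrics reported in the Quick Start (Sharpe ratio, max drawdown, hit rate), here are simplified textbook definitions computed from a daily P&L series. The `sketch_metrics` helper is illustrative only; aponyx's `compute_all_metrics` may use different conventions:

```python
import math

def sketch_metrics(daily_pnl: list[float]) -> tuple[float, float, float]:
    """Textbook-style Sharpe, max drawdown, and hit rate from daily P&L."""
    n = len(daily_pnl)
    mean = sum(daily_pnl) / n
    var = sum((x - mean) ** 2 for x in daily_pnl) / (n - 1)
    sharpe = mean / math.sqrt(var) * math.sqrt(252)  # annualized, zero risk-free
    # Max drawdown: largest peak-to-trough drop of the cumulative equity curve.
    equity, peak, max_dd = 0.0, 0.0, 0.0
    for x in daily_pnl:
        equity += x
        peak = max(peak, equity)
        max_dd = max(max_dd, peak - equity)
    hit_rate = sum(1 for x in daily_pnl if x > 0) / n  # share of winning days
    return sharpe, max_dd, hit_rate

pnl = [120.0, -40.0, 80.0, -100.0, 60.0]
sharpe, max_dd, hit_rate = sketch_metrics(pnl)
print(f"max drawdown ${max_dd:,.0f}, hit rate {hit_rate:.0%}")
```

These definitions are deliberately minimal (no risk-free rate, no compounding); production metric suites typically add annualization choices, skew/kurtosis, and turnover-adjusted variants.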
Development
Running Tests
pytest # All tests
pytest --cov=aponyx # With coverage
pytest tests/models/ # Specific module
Code Quality
black src/ tests/ # Format code
ruff check src/ tests/ # Lint
mypy src/ # Type check
All tools are configured in pyproject.toml with project-specific settings.
Design Philosophy
Core Principles
- Modularity - Clean separation between data, models, backtest, and infrastructure
- Reproducibility - Deterministic outputs with seed control and metadata logging
- Type Safety - Strict type hints and runtime validation throughout
- Simplicity - Prefer functions over classes, explicit over implicit
- Transparency - Clear separation between strategy logic and execution
- No Legacy Support - Breaking changes without deprecation warnings; always use latest patterns
Signal Convention
All signals follow a consistent sign convention for interpretability:
- Positive values → Long credit risk (buy CDX = sell protection)
- Negative values → Short credit risk (sell CDX = buy protection)
This ensures clarity when evaluating signals independently or combining them in future research.
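A hypothetical helper makes the convention concrete; `position_label` is illustrative and not part of aponyx:

```python
# Illustrative mapping from signal sign to credit-risk direction; the
# long/short wording mirrors the convention stated above.
def position_label(signal_value: float) -> str:
    if signal_value > 0:
        return "long credit risk (buy CDX = sell protection)"
    if signal_value < 0:
        return "short credit risk (sell CDX = buy protection)"
    return "flat"

print(position_label(1.8))   # long credit risk (buy CDX = sell protection)
print(position_label(-0.6))  # short credit risk (sell CDX = buy protection)
```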
Requirements
- Python 3.12 (no backward compatibility with 3.11 or earlier)
- Modern type syntax (str | None, not Optional[str])
- Optional: Bloomberg Terminal with blpapi for live data
Breaking changes: This is an early-stage project under active development. Breaking changes may occur between versions without deprecation warnings or backward compatibility.
Contributing
This is an early-stage personal research project. See CONTRIBUTING.md for technical guidelines if you'd like to contribute.
Security
Security issues addressed on a best-effort basis. See SECURITY.md for reporting guidelines and scope.
License
MIT License - see LICENSE for details.
Links
- PyPI: https://pypi.org/project/aponyx/
- Repository: https://github.com/stabilefrisur/aponyx
- Issues: https://github.com/stabilefrisur/aponyx/issues
- Changelog: https://github.com/stabilefrisur/aponyx/blob/master/CHANGELOG.md
Maintained by stabilefrisur
Version: 0.1.10
Last Updated: November 16, 2025
File details
Details for the file aponyx-0.1.10.tar.gz.
File metadata
- Download URL: aponyx-0.1.10.tar.gz
- Upload date:
- Size: 366.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b0f6ce55f3a00e9993ecc4f2d731ce42d19a934f921450415683c90b18884088 |
| MD5 | 9bc3a77ce6cae006d1243459e2c4c51b |
| BLAKE2b-256 | 4b5944c8d1834b49ded2e8aca3c20e9b0bcb03ac20d91ff161d8a431c609797b |
|
File details
Details for the file aponyx-0.1.10-py3-none-any.whl.
File metadata
- Download URL: aponyx-0.1.10-py3-none-any.whl
- Upload date:
- Size: 409.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 486e242c157a8709c252a103d050d061e736629ec0694aeb8a4ac218480ec493 |
| MD5 | d03cad05a6c0ac2806893d1090814f97 |
| BLAKE2b-256 | 96b48cdbe59749c893be93bad40ea4ebffcb82a8d02ffa6b6426877bb5cf2d62 |
|