sandtable

Event-driven backtesting framework with realistic execution modeling

A Python backtesting framework where all components communicate exclusively through a central event queue. This design enforces temporal causality and prevents look-ahead bias by construction.

Why event-driven?

Traditional backtesting frameworks often allow direct access to future data, making it easy to accidentally introduce look-ahead bias. This framework prevents that by design:

  1. Temporal causality: Events are processed in strict timestamp order via a priority queue
  2. No future data access: The DataHandler only exposes historical data up to the current bar
  3. Realistic execution: Orders are filled with configurable slippage, market impact, and commissions
  4. Clear data flow: Events flow in one direction: MARKET_DATA → SIGNAL → ORDER → FILL
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ DataHandler │────▶│  Strategy   │────▶│  Portfolio  │────▶│  Executor   │
│ (bars)      │     │  (signals)  │     │  (orders)   │     │  (fills)    │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │                   │
       └───────────────────┴───────────────────┴───────────────────┘
                                    │
                           ┌────────▼────────┐
                           │   Event Queue   │
                           │  (priority by   │
                           │   timestamp)    │
                           └─────────────────┘
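The central queue in the diagram can be sketched as a thin wrapper over a min-heap keyed by `(timestamp, counter)`, where the counter breaks ties so same-timestamp events pop in FIFO order. The `EventQueue` class below is an illustrative sketch of that idea, not sandtable's actual implementation:

```python
import heapq
import itertools
from datetime import datetime

class EventQueue:
    """Priority queue ordered by (timestamp, insertion counter).

    The counter breaks ties so that events sharing a timestamp are
    popped in FIFO order. Illustrative only; names are hypothetical.
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def put(self, timestamp, event):
        heapq.heappush(self._heap, (timestamp, next(self._counter), event))

    def get(self):
        _, _, event = heapq.heappop(self._heap)
        return event

    def __len__(self):
        return len(self._heap)

q = EventQueue()
q.put(datetime(2022, 1, 3), "SIGNAL")
q.put(datetime(2022, 1, 2), "MARKET_DATA")
q.put(datetime(2022, 1, 3), "ORDER")  # same timestamp as SIGNAL -> FIFO
print([q.get() for _ in range(len(q))])  # ['MARKET_DATA', 'SIGNAL', 'ORDER']
```

Because the counter is strictly increasing, two events with equal timestamps can never compare equal in the heap, so the event payloads themselves never need to be comparable.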

Setup

Requires Python 3.12 and uv.

# install uv (if you don't have it)
curl -LsSf https://astral.sh/uv/install.sh | sh

# create venv and install dependencies
uv sync

# install with all extras (yfinance, matplotlib, plotly)
uv pip install -e ".[all]"

# or with dev dependencies (pytest, ruff, viz, reports)
uv pip install -e ".[dev]"

Quick start

One-liner API

from sandtable import run_backtest, AbstractStrategy, SignalEvent, MarketDataEvent, Direction, FixedSlippage

class MeanReversion(AbstractStrategy):
    lookback: int = 20
    threshold: float = 2.0

    def generate_signal(self, bar: MarketDataEvent) -> SignalEvent | None:
        closes = self.get_historical_closes(self.lookback)
        if len(closes) < self.lookback:
            return None
        mean = sum(closes) / len(closes)
        std = (sum((c - mean) ** 2 for c in closes) / len(closes)) ** 0.5
        if std == 0:
            return None
        z_score = (bar.close - mean) / std
        if z_score < -self.threshold:
            return SignalEvent(
                timestamp=bar.timestamp, symbol=bar.symbol,
                direction=Direction.LONG, strength=1.0,
            )
        return None

result = run_backtest(
    strategy=MeanReversion(),
    symbols="SPY",
    start="2022-01-01", end="2023-12-31",
    slippage=FixedSlippage(bps=5),
    commission=0.005,
)
print(result.metrics)
result.tearsheet("tearsheet.html")

Parameter sweep

from sandtable import Metric, run_parameter_sweep

sweep = run_parameter_sweep(
    strategy_class=MeanReversion,
    param_grid={"lookback": [10, 20, 30], "threshold": [1.5, 2.0, 2.5]},
    symbols="SPY",
    start="2022-01-01", end="2023-12-31",
    metric=Metric.SHARPE_RATIO,
)
print(sweep.best_params)
print(sweep.to_dataframe())

Run the example

uv run python examples/quick_start.py

Usage

Basic backtest (manual wiring)

from sandtable import CSVDataHandler, MACrossoverStrategy
from sandtable.core import Backtest
from sandtable.execution import ExecutionConfig, ExecutionSimulator, FixedSlippage
from sandtable.portfolio import Portfolio

# set up components
data = CSVDataHandler("data/sample_ohlcv.csv", "SPY")
strategy = MACrossoverStrategy(fast_period=10, slow_period=30)
portfolio = Portfolio(initial_capital=100_000)
executor = ExecutionSimulator(
    config=ExecutionConfig(commission_per_share=0.005),
    slippage_model=FixedSlippage(bps=5),
)

# run backtest
backtest = Backtest(data, strategy, portfolio, executor)
metrics = backtest.run()
print(metrics)

Custom strategy

from sandtable import AbstractStrategy, MarketDataEvent, SignalEvent, Direction

class MyStrategy(AbstractStrategy):
    def generate_signal(self, bar: MarketDataEvent) -> SignalEvent | None:
        closes = self.get_historical_closes(20)
        if len(closes) < 20:
            return None  # warmup period

        # [your logic here]
        if closes[-1] > sum(closes) / len(closes):
            return SignalEvent(
                timestamp=bar.timestamp,
                symbol=bar.symbol,
                direction=Direction.LONG,
                strength=1.0,
            )
        return None

Multi-symbol backtest

from sandtable import run_backtest

result = run_backtest(
    strategy=MyStrategy(),
    symbols=["SPY", "QQQ", "IWM"],
    start="2022-01-01", end="2023-12-31",
)

Tearsheet and comparison

# Single strategy tearsheet
result.tearsheet("tearsheet.html")

# Compare multiple strategies
from sandtable import compare_strategies

compare_strategies(
    {"Strategy A": result_a, "Strategy B": result_b},
    output_path="comparison.html",
)

Execution models

from sandtable.execution import (
    ExecutionConfig, ExecutionSimulator,
    ZeroSlippage, FixedSlippage, SpreadSlippage,
    NoMarketImpact, SquareRootImpactModel,
)

# no transaction costs (unrealistic baseline)
executor = ExecutionSimulator(
    slippage_model=ZeroSlippage(),
    impact_model=NoMarketImpact(),
)

# realistic costs
executor = ExecutionSimulator(
    config=ExecutionConfig(
        commission_per_share=0.005,
        commission_minimum=1.0,
    ),
    slippage_model=FixedSlippage(bps=5),
    impact_model=SquareRootImpactModel(eta=0.1),
)

Project structure

sandtable/
├── src/sandtable/
│   ├── __init__.py        # Public API exports
│   ├── api.py             # run_backtest(), run_parameter_sweep()
│   ├── core/              # Events, queue, backtest engine, result
│   ├── data_handlers/     # DataHandler protocol, CSV, yfinance, multi-symbol
│   ├── strategy/          # Strategy base class and implementations
│   ├── execution/         # Slippage, impact, and fill simulation
│   ├── portfolio/         # Position and cash management
│   ├── metrics/           # Performance calculation
│   ├── report/            # HTML tearsheet and strategy comparison
│   └── viz/               # matplotlib charts and animation
├── tests/                 # Unit tests
├── data/                  # Sample OHLCV data
├── examples/              # Example scripts
└── pyproject.toml

Running tests

# run all tests
uv run python -m pytest

# run with verbose output
uv run python -m pytest -v

# run specific test file
uv run python -m pytest tests/core/test_event_queue.py

# run with coverage
uv run python -m coverage run --include="src/sandtable/*" -m pytest tests/
uv run python -m coverage report --show-missing

Design decisions

  1. Lookahead Prevention: DataHandler.get_historical_bars(n) only returns data before the current index
  2. Event Ordering: Priority queue with (timestamp, counter) ensures correct ordering and FIFO for same-timestamp events
  3. Fill Price Bounds: Fill prices are clamped to the bar's [low, high] range
  4. Short Positions: Cash increases on short sale, decreases on cover, with correct P&L tracking
  5. Warmup Period: Strategies return None until they have enough data for their indicators
  6. Multi-symbol: MultiDataHandler merges bars from multiple sources via min-heap for correct temporal ordering

Performance metrics

The PerformanceMetrics dataclass includes:

Category   Metrics
Returns    total_return, cagr
Risk       sharpe_ratio, sortino_ratio, max_drawdown
Trades     num_trades, win_rate, profit_factor, avg_trade_pnl
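Two of the metrics above, computed by their textbook definitions on a toy equity curve (the formulas are standard; this is not sandtable's source):

```python
# toy equity curve, one value per bar
equity = [100_000, 101_000, 99_500, 102_500, 101_800]

# total_return: final equity over initial equity, minus one
total_return = equity[-1] / equity[0] - 1

# max_drawdown: largest peak-to-trough decline as a fraction of the peak
peak, max_dd = equity[0], 0.0
for v in equity:
    peak = max(peak, v)
    max_dd = max(max_dd, (peak - v) / peak)

print(f"total_return={total_return:.4f}  max_drawdown={max_dd:.4f}")
# total_return=0.0180  max_drawdown=0.0149
```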

License

See LICENSE file.
