sandtable

Event-driven backtesting framework with realistic execution modeling

A Python backtesting framework where all components communicate exclusively through a central event queue. This design enforces temporal causality and prevents look-ahead bias by construction.

Why event-driven?

Traditional backtesting frameworks often allow direct access to future data, making it easy to accidentally introduce look-ahead bias. This framework prevents that by design:

  1. Temporal causality: Events are processed in strict timestamp order via a priority queue
  2. No future data access: The DataHandler only exposes historical data up to the current bar
  3. Realistic execution: Orders are filled with configurable slippage, market impact, and commissions
  4. Clear data flow: Events flow in one direction: MARKET_DATA → SIGNAL → ORDER → FILL
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ DataHandler │────▶│  Strategy   │────▶│  Portfolio  │────▶│  Executor   │
│ (bars)      │     │  (signals)  │     │  (orders)   │     │  (fills)    │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │                   │
       └───────────────────┴───────────────────┴───────────────────┘
                                    │
                           ┌────────▼────────┐
                           │   Event Queue   │
                           │  (priority by   │
                           │   timestamp)    │
                           └─────────────────┘
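The queue's ordering rule can be sketched in a few lines (a simplified illustration, not sandtable's actual implementation; the dict-based events stand in for the real event classes). A monotonically increasing counter breaks ties so that events sharing a timestamp dequeue in FIFO order:

```python
import heapq
import itertools

class EventQueue:
    """Toy priority queue ordered by (timestamp, insertion counter)."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal timestamps

    def put(self, event):
        heapq.heappush(self._heap, (event["timestamp"], next(self._counter), event))

    def get(self):
        _, _, event = heapq.heappop(self._heap)
        return event

q = EventQueue()
q.put({"timestamp": 2, "type": "SIGNAL"})
q.put({"timestamp": 1, "type": "MARKET_DATA"})
q.put({"timestamp": 2, "type": "ORDER"})
assert q.get()["type"] == "MARKET_DATA"  # earliest timestamp first
assert q.get()["type"] == "SIGNAL"       # FIFO among same-timestamp events
assert q.get()["type"] == "ORDER"
```

Because every component both consumes from and publishes to this queue, no component can observe an event before its timestamp comes up.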

Installation

pip install sandtable

See sandtable on PyPI for available versions.

Development setup

Requires Python 3.12 and uv.

# install uv (if you don't have it)
curl -LsSf https://astral.sh/uv/install.sh | sh

# create venv and install dependencies
uv sync

# install with all extras (yfinance, matplotlib, plotly)
uv pip install -e ".[all]"

# or with dev dependencies (pytest, ruff, viz, reports)
uv pip install -e ".[dev]"

Quick start

One-liner API

from sandtable import run_backtest, AbstractStrategy, SignalEvent, MarketDataEvent, Direction, FixedSlippage

class MeanReversion(AbstractStrategy):
    lookback: int = 20
    threshold: float = 2.0

    def generate_signal(self, bar: MarketDataEvent) -> SignalEvent | None:
        closes = self.get_historical_closes(self.lookback)
        if len(closes) < self.lookback:
            return None
        mean = sum(closes) / len(closes)
        std = (sum((c - mean) ** 2 for c in closes) / len(closes)) ** 0.5
        if std == 0:
            return None
        z_score = (bar.close - mean) / std
        if z_score < -self.threshold:
            return SignalEvent(
                timestamp=bar.timestamp, symbol=bar.symbol,
                direction=Direction.LONG, strength=1.0,
            )
        return None

result = run_backtest(
    strategy=MeanReversion(),
    symbols="SPY",
    start="2022-01-01", end="2023-12-31",
    slippage=FixedSlippage(bps=5),
    commission=0.005,
)
print(result.metrics)
result.tearsheet("tearsheet.pdf")

Parameter sweep

from sandtable import Metric, run_parameter_sweep

sweep = run_parameter_sweep(
    strategy_class=MeanReversion,
    param_grid={"lookback": [10, 20, 30], "threshold": [1.5, 2.0, 2.5]},
    symbols="SPY",
    start="2022-01-01", end="2023-12-31",
    metric=Metric.SHARPE_RATIO,
)
print(sweep.best_params)
print(sweep.to_dataframe())

Run the example

uv run python examples/quick_start.py

Usage

Basic backtest (manual wiring)

from sandtable import CSVDataHandler, MACrossoverStrategy
from sandtable.core import Backtest
from sandtable.execution import ExecutionConfig, ExecutionSimulator, FixedSlippage
from sandtable.portfolio import Portfolio

# set up components
data = CSVDataHandler("data/sample_ohlcv.csv", "SPY")
strategy = MACrossoverStrategy(fast_period=10, slow_period=30)
portfolio = Portfolio(initial_capital=100_000)
executor = ExecutionSimulator(
    config=ExecutionConfig(commission_per_share=0.005),
    slippage_model=FixedSlippage(bps=5),
)

# run backtest
backtest = Backtest(data, strategy, portfolio, executor)
metrics = backtest.run()
print(metrics)

Custom strategy

from sandtable import AbstractStrategy, MarketDataEvent, SignalEvent, Direction

class MyStrategy(AbstractStrategy):
    def generate_signal(self, bar: MarketDataEvent) -> SignalEvent | None:
        closes = self.get_historical_closes(20)
        if len(closes) < 20:
            return None  # warmup period

        # [your logic here]
        if closes[-1] > sum(closes) / len(closes):
            return SignalEvent(
                timestamp=bar.timestamp,
                symbol=bar.symbol,
                direction=Direction.LONG,
                strength=1.0,
            )
        return None

Multi-symbol backtest

from sandtable import run_backtest

result = run_backtest(
    strategy=MyStrategy(),
    symbols=["SPY", "QQQ", "IWM"],
    start="2022-01-01", end="2023-12-31",
)

Tearsheet and comparison

# Single strategy tearsheet
result.tearsheet("tearsheet.pdf")

# Compare multiple strategies
from sandtable import compare_strategies

compare_strategies(
    {"Strategy A": result_a, "Strategy B": result_b},
    output_path="comparison.pdf",
)

Execution models

from sandtable.execution import (
    ExecutionConfig, ExecutionSimulator,
    ZeroSlippage, FixedSlippage, SpreadSlippage,
    NoMarketImpact, SquareRootImpactModel,
)

# no transaction costs (unrealistic baseline)
executor = ExecutionSimulator(
    slippage_model=ZeroSlippage(),
    impact_model=NoMarketImpact(),
)

# realistic costs
executor = ExecutionSimulator(
    config=ExecutionConfig(
        commission_per_share=0.005,
        commission_minimum=1.0,
    ),
    slippage_model=FixedSlippage(bps=5),
    impact_model=SquareRootImpactModel(eta=0.1),
)
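For intuition, a fixed-basis-points slippage model typically adjusts the reference price like this (illustrative math with a hypothetical helper name; sandtable's FixedSlippage may differ in detail). Costs always work against the trader, so buys fill higher and sells fill lower:

```python
def fixed_slippage_fill(price: float, bps: float, is_buy: bool) -> float:
    """Shift the reference price by a fixed number of basis points,
    in the direction unfavorable to the trader."""
    adj = price * bps / 10_000
    return price + adj if is_buy else price - adj

fixed_slippage_fill(100.0, 5, is_buy=True)   # ~100.05 (5 bps above reference)
fixed_slippage_fill(100.0, 5, is_buy=False)  # ~99.95 (5 bps below reference)
```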

Project structure

src/sandtable/
├── __init__.py        # Public API exports
├── api.py             # run_backtest(), run_parameter_sweep()
├── config.py          # Configuration dataclasses
├── core/              # Events, queue, backtest engine, result
├── data_handlers/     # DataHandler protocol, CSV, yfinance, multi-symbol
├── strategy/          # Strategy base class and implementations
├── execution/         # Slippage, impact, and fill simulation
├── portfolio/         # Position and cash management
├── metrics/           # Performance calculation
├── report/            # PDF tearsheet and strategy comparison
├── utils/             # Shared utilities
└── viz/               # matplotlib charts and animation

Running tests

# run all tests
uv run python -m pytest

# run with verbose output
uv run python -m pytest -v

# run specific test file
uv run python -m pytest tests/core/test_event_queue.py

# run with coverage
uv run python -m coverage run --include="src/sandtable/*" -m pytest tests/
uv run python -m coverage report --show-missing

Design decisions

  1. Lookahead Prevention: DataHandler.get_historical_bars(n) only returns data before the current index
  2. Event Ordering: Priority queue with (timestamp, counter) ensures correct ordering and FIFO for same-timestamp events
  3. Fill Price Bounds: Fill prices are clamped to the bar's [low, high] range
  4. Short Positions: Cash increases on short sale, decreases on cover, with correct P&L tracking
  5. Warmup Period: Strategies return None until they have enough data for their indicators
  6. Multi-symbol: MultiDataHandler merges bars from multiple sources via min-heap for correct temporal ordering
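Decision 3 (fill price bounds) can be sketched in one line (an illustrative helper, not the library's internal name): a fill can never print outside the range the bar actually traded in.

```python
def clamp_fill_price(raw_fill: float, bar_low: float, bar_high: float) -> float:
    """Clamp a model-adjusted fill price to the bar's [low, high] range."""
    return min(max(raw_fill, bar_low), bar_high)

clamp_fill_price(101.2, bar_low=99.5, bar_high=101.0)  # 101.0 (capped at high)
clamp_fill_price(99.0, bar_low=99.5, bar_high=101.0)   # 99.5 (floored at low)
```

This matters because slippage and impact models can push a theoretical fill past prices that ever existed in the bar.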

Performance metrics

The PerformanceMetrics dataclass includes:

Category   Metrics
Returns    total_return, cagr
Risk       sharpe_ratio, sortino_ratio, max_drawdown
Trades     num_trades, win_rate, profit_factor, avg_trade_pnl
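For reference, two of these metrics are conventionally computed as follows (standard formulas; sandtable's exact annualization factor and denominator choices may differ):

```python
import math

def sharpe_ratio(returns: list[float], periods_per_year: int = 252) -> float:
    """Annualized mean/stdev of per-period returns (population stdev here)."""
    mean = sum(returns) / len(returns)
    std = math.sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))
    return (mean / std) * math.sqrt(periods_per_year) if std else 0.0

def max_drawdown(equity: list[float]) -> float:
    """Largest peak-to-trough decline of an equity curve, as a negative fraction."""
    peak, mdd = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        mdd = min(mdd, value / peak - 1)
    return mdd

max_drawdown([100, 120, 90, 110])  # -0.25 (peak 120 -> trough 90)
```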

License

See LICENSE file.
