# sandtable

Event-driven backtesting and research simulation platform for systematic trading strategies.
Event-driven backtesting for systematic trading strategies. Events are processed in strict timestamp order via a priority queue (`MARKET_DATA → SIGNAL → ORDER → FILL`). The data handler only exposes historical data up to the current bar, and orders are filled with configurable slippage, market impact, and commissions.
## Installation

```shell
pip install sandtable
```

See sandtable on PyPI for available versions.
## Development setup

Requires Python 3.13+ and uv.

```shell
git clone https://github.com/westimator/sandtable.git
cd sandtable
uv sync
```
### Docker services (optional)

MySQL is available via Docker Compose for result persistence. This is optional; SQLite works out of the box with no external services.

Requires the Docker Compose plugin. If `docker compose version` prints an error, install the plugin:

```shell
# install the plugin
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

# verify
docker compose version
```

Then start services:

```shell
# start in background
docker compose up -d

# stop services (data is preserved in Docker volumes)
docker compose down
```

This starts:

- MySQL 8.0 on port 3306 (user: `sandtable`, password: `sandtable`, database: `sandtable`)
## Getting started

### Run the demo

Runs a full showcase with zero arguments: data loading, strategy backtests with realistic execution, risk management, parameter sweeps, walk-forward analysis, statistical significance tests, strategy comparison, persistence, and PDF report generation. Reports are saved to `output/`, and results are persisted to `sandtable.db`.

```shell
# default: SQLite result store, in-memory data from bundled CSVs
uv run python demo.py

# use MySQL for result persistence (requires docker compose up -d)
uv run python demo.py --store mysql
```
### Launch the dashboard

```shell
# default: SQLite result store, in-memory data from bundled CSVs
uv run streamlit run app.py

# use MySQL for result persistence (requires docker compose up -d)
uv run streamlit run app.py -- --result-backend mysql

# see all options (connection params, db path, etc.)
uv run streamlit run app.py -- --help
```
Opens a local Streamlit dashboard at http://localhost:8501 with five pages:

- **Backtest**: Run a single strategy backtest with configurable execution and risk parameters.
- **Sweep**: Parameter grid search with results table and 2D heatmap.
- **Walkforward**: Walk-forward analysis with per-fold metrics and a stitched OOS equity curve.
- **Compare**: Side-by-side strategy comparison with overlaid equity curves and correlation matrix.
- **Runs**: Browse, inspect, and manage persisted runs.
Backend configuration (result store) is set once at startup via CLI flags and shown read-only on the Home page. Per-run settings (strategy, data source, symbols, dates, execution, risk) are configured in the sidebar.
## Python API

### One-liner backtest

```python
from sandtable import (
    run_backtest, AbstractStrategy, SignalEvent,
    MarketDataEvent, Direction, FixedSlippage,
    DataHandler, CSVProvider,
)

class MyStrategy(AbstractStrategy):
    def generate_signal(self, bar: MarketDataEvent) -> SignalEvent | None:
        closes = self.get_historical_closes(20, symbol=bar.symbol)
        if len(closes) < 20:
            return None
        mean = sum(closes) / len(closes)
        if bar.close < mean * 0.98:
            return SignalEvent(
                timestamp=bar.timestamp, symbol=bar.symbol,
                direction=Direction.LONG, strength=1.0,
            )
        return None

data = DataHandler(provider=CSVProvider("data/fixtures"), universe=["SPY"])
data.load("2018-01-01", "2023-12-31")

result = run_backtest(
    strategy=MyStrategy(),
    data=data,
    slippage=FixedSlippage(bps=5),
    commission=0.005,
)
print(result.metrics)
```
### Parameter sweep

```python
from sandtable import Metric, run_parameter_sweep, MeanReversionStrategy

sweep = run_parameter_sweep(
    strategy_class=MeanReversionStrategy,
    param_grid={"lookback": [10, 20, 30], "threshold": [1.5, 2.0, 2.5]},
    data=data,
    metric=Metric.SHARPE_RATIO,
)
print(sweep.best_params)
print(sweep.to_dataframe())
```
### Walk-forward analysis

```python
from sandtable import run_walkforward, MACrossoverStrategy

wf = run_walkforward(
    strategy_cls=MACrossoverStrategy,
    param_grid={"fast_period": [5, 10, 15], "slow_period": [20, 30, 40]},
    data=data,
    train_window=252,
    test_window=126,
    optimization_metric=Metric.SHARPE_RATIO,
)
print(f"OOS Sharpe: {wf.oos_sharpe:.2f}")
```
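The 252-bar train / 126-bar test windows above follow the standard rolling walk-forward scheme: optimize parameters on each in-sample window, then evaluate on the out-of-sample window that immediately follows. A minimal sketch of how such folds can be generated; `walkforward_windows` is a hypothetical helper, not part of the sandtable API:

```python
def walkforward_windows(n_bars: int, train_window: int, test_window: int):
    """Yield (train_start, train_end, test_end) index triples for rolling
    walk-forward folds; each new fold starts one test window later."""
    start = 0
    while start + train_window + test_window <= n_bars:
        yield (start, start + train_window, start + train_window + test_window)
        start += test_window

# With ~3 years of daily bars, 252-bar train / 126-bar test gives 4 folds:
folds = list(walkforward_windows(n_bars=756, train_window=252, test_window=126))
print(len(folds))   # 4
print(folds[0])     # (0, 252, 378)
```

Stitching the per-fold test segments together yields the out-of-sample equity curve the dashboard's Walkforward page displays.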
### Risk management

```python
from sandtable import (
    RiskManager, MaxLeverageRule, MaxDrawdownRule,
    MaxDailyLossRule, MaxPositionSizeRule,
)

risk_manager = RiskManager(rules=[
    MaxLeverageRule(max_leverage=2.0),
    MaxDrawdownRule(max_drawdown_pct=0.15),
    MaxDailyLossRule(max_daily_loss_pct=0.03),
    MaxPositionSizeRule(max_position_pct=0.25),
])

result = run_backtest(
    strategy=MyStrategy(),
    data=data,
    risk_manager=risk_manager,
)
```
### Statistical significance

```python
sig = result.significance_tests(n_simulations=1000, random_seed=42)
for name, sr in sig.items():
    print(f"{name}: p={sr.p_value:.4f} {'*' if sr.is_significant else ''}")
```
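The `stats/` module lists a permutation test among its significance tests. To illustrate the general idea, here is a generic sign-flip permutation test on mean returns; this is a sketch of the technique, not necessarily the library's exact procedure, and `permutation_pvalue` is a hypothetical name:

```python
import random

def permutation_pvalue(returns: list[float], n_simulations: int = 1000,
                       seed: int = 42) -> float:
    """One-sided test of mean return > 0: randomly flip the sign of each
    return and count how often the permuted mean matches or beats the
    observed mean. A small p-value suggests the edge is not sign-symmetric noise."""
    rng = random.Random(seed)
    observed = sum(returns) / len(returns)
    hits = 0
    for _ in range(n_simulations):
        flipped = [r if rng.random() < 0.5 else -r for r in returns]
        if sum(flipped) / len(flipped) >= observed:
            hits += 1
    return hits / n_simulations

sample_rng = random.Random(0)
rets = [sample_rng.gauss(0.001, 0.01) for _ in range(252)]  # synthetic daily returns
p = permutation_pvalue(rets)
print(0.0 <= p <= 1.0)  # True
```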
### Persistence

```python
from sandtable import SQLiteResultStore

store = SQLiteResultStore("sandtable.db")

# auto-persist during backtest
result = run_backtest(strategy=MyStrategy(), data=data, result_store=store)

# browse runs
for run in store.list_runs(min_sharpe=1.0):
    print(f"{run.strategy_name}: Sharpe={run.sharpe_ratio:.2f}")

# reload a run
config, result = store.load_run(run.run_id)
```

MySQL is a drop-in replacement:

```python
from sandtable import MySQLResultStore

store = MySQLResultStore(
    host="localhost",
    port=3306,
    user="sandtable",
    password="sandtable",
    database="sandtable",
)
```
### Reports

```python
from sandtable import generate_pdf_tearsheet, generate_risk_report, generate_comparison_report

generate_pdf_tearsheet(result, output_path="tearsheet.pdf")
generate_risk_report(result, output_path="risk_report.pdf")
generate_comparison_report(
    {"Strategy A": result_a, "Strategy B": result_b},
    output_path="comparison.pdf",
)
```
## How it works

The core is an event loop. On each bar the `DataHandler` emits a `MarketDataEvent`, the strategy decides whether to emit a `SignalEvent`, the portfolio sizes it into an `OrderEvent`, the risk manager approves, resizes, or rejects it, and the execution simulator fills it as a `FillEvent` with slippage, spread, and commission applied. All events are frozen dataclasses. The queue is a heap sorted by `(timestamp, priority)`, so events at the same timestamp always process in the right order.
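The ordering guarantee can be demonstrated with Python's `heapq` and `(timestamp, priority, seq)` sort keys. This is a standalone sketch: `QueuedEvent`, `PRIORITY`, and `push` are illustrative names, not sandtable internals.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any

# Hypothetical priority ranks mirroring the pipeline order: at equal
# timestamps, MARKET_DATA processes before SIGNAL, SIGNAL before ORDER, etc.
PRIORITY = {"MARKET_DATA": 0, "SIGNAL": 1, "ORDER": 2, "FILL": 3}

@dataclass(order=True)
class QueuedEvent:
    timestamp: datetime
    priority: int
    seq: int                        # insertion counter: stable tie-breaking
    payload: Any = field(compare=False)

heap: list[QueuedEvent] = []
seq = 0

def push(ts: datetime, kind: str, payload: Any) -> None:
    global seq
    heapq.heappush(heap, QueuedEvent(ts, PRIORITY[kind], seq, payload))
    seq += 1

t = datetime(2023, 1, 3)
push(t, "ORDER", "order")           # pushed first...
push(t, "MARKET_DATA", "bar")
push(t, "SIGNAL", "signal")

# ...but pops come back in MARKET_DATA -> SIGNAL -> ORDER order.
kinds = [heapq.heappop(heap).payload for _ in range(3)]
print(kinds)  # ['bar', 'signal', 'order']
```

The insertion counter matters: without it, two events with equal `(timestamp, priority)` would be compared on their payloads, which may not be orderable.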
Three orthogonal enums control where data comes from and where it goes:

| Enum | Values | Purpose |
|---|---|---|
| `DataSource` | `csv`, `yfinance` | where market data originates |
| `DataBackend` | `memory` | where market data lives at query time |
| `ResultBackend` | `sqlite`, `mysql` | where backtest results are persisted |
### Event types

| Event | Key fields | Emitted by | Consumed by |
|---|---|---|---|
| `MarketDataEvent` | `symbol`, `timestamp`, OHLCV | DataHandler | Strategy, Portfolio |
| `SignalEvent` | `symbol`, `direction`, `strength` | Strategy | Portfolio |
| `OrderEvent` | `symbol`, `direction`, `quantity`, `order_type` | Portfolio (after risk check) | ExecutionSimulator |
| `FillEvent` | `symbol`, `fill_price`, `commission`, `slippage`, `market_impact` | ExecutionSimulator | Portfolio |
| `RiskBreachEvent` | `rule_name`, `action`, `breach_value`, `threshold` | RiskManager | logged, not queued |
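Because events are frozen dataclasses, a queued event cannot be mutated after emission. A sketch of what a `SignalEvent`-shaped frozen dataclass looks like, using field names from the table above; the `Direction` enum values are assumptions:

```python
from dataclasses import dataclass, FrozenInstanceError
from datetime import datetime
from enum import Enum

class Direction(Enum):     # assumed shape; sandtable's enum may differ
    LONG = 1
    SHORT = -1

# frozen=True makes every attribute read-only after construction, so a
# downstream consumer can never alter an event another component produced.
@dataclass(frozen=True)
class SignalEvent:
    timestamp: datetime
    symbol: str
    direction: Direction
    strength: float

sig = SignalEvent(datetime(2023, 1, 3), "SPY", Direction.LONG, 1.0)
try:
    sig.strength = 0.5     # any assignment raises FrozenInstanceError
except FrozenInstanceError:
    print("immutable")
```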
### Execution models

```python
from sandtable.execution import (
    ExecutionConfig, ExecutionSimulator,
    ZeroSlippage, FixedSlippage, SpreadSlippage,
    NoMarketImpact, SquareRootImpactModel,
)

# no transaction costs (unrealistic baseline)
executor = ExecutionSimulator(
    slippage_model=ZeroSlippage(),
    impact_model=NoMarketImpact(),
)

# realistic costs
executor = ExecutionSimulator(
    config=ExecutionConfig(
        commission_per_share=0.005,
        commission_minimum=1.0,
    ),
    slippage_model=FixedSlippage(bps=5),
    impact_model=SquareRootImpactModel(eta=0.1),
)
```
### Risk rules

Seven composable rules sit between signal generation and order submission:

| Rule | What it does |
|---|---|
| `MaxPositionSizeRule` | caps single-position value as a fraction of equity |
| `MaxPortfolioExposureRule` | caps gross portfolio exposure |
| `MaxLeverageRule` | caps the gross exposure / equity ratio |
| `MaxOrderSizeRule` | hard-rejects orders exceeding a quantity limit |
| `MaxDailyLossRule` | blocks all trading after an intraday loss threshold |
| `MaxDrawdownRule` | halts the strategy permanently after a drawdown threshold |
| `MaxConcentrationRule` | caps single-position value as a fraction of gross exposure |

All rejections and resizes are logged as `RiskBreachEvent` records.
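To make the approve/resize/reject flow concrete, here is a hypothetical sketch of the arithmetic a leverage cap like `MaxLeverageRule` performs; none of these names or signatures are sandtable internals:

```python
from dataclasses import dataclass

@dataclass
class ProposedOrder:
    symbol: str
    quantity: int
    price: float

def apply_max_leverage(order: ProposedOrder, gross_exposure: float,
                       equity: float, max_leverage: float) -> ProposedOrder:
    """If the order would push gross_exposure / equity past the cap,
    scale the quantity down to fit; if no headroom remains, reject it."""
    headroom = max_leverage * equity - gross_exposure
    if headroom <= 0:
        return ProposedOrder(order.symbol, 0, order.price)   # reject outright
    if order.quantity * order.price <= headroom:
        return order                                          # approve as-is
    resized = int(headroom // order.price)                    # resize to fit
    return ProposedOrder(order.symbol, resized, order.price)

# Equity 100k, 2x cap, 150k already gross: only 50k of headroom remains,
# so a 200-share order at $400 (an 80k notional) is cut down.
o = apply_max_leverage(ProposedOrder("SPY", 200, 400.0), 150_000, 100_000, 2.0)
print(o.quantity)  # 125 shares (50,000 / 400), down from 200
```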
### Metrics

| Category | Metrics |
|---|---|
| Returns | `total_return`, `cagr` |
| Risk | `sharpe_ratio`, `sortino_ratio`, `max_drawdown` |
| Trades | `num_trades`, `win_rate`, `profit_factor`, `avg_trade_pnl` |
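`sharpe_ratio` and `max_drawdown` have standard definitions worth recalling. A minimal sketch, assuming daily returns, a zero risk-free rate, and 252-period annualization (the library's exact conventions may differ):

```python
import math

def sharpe_ratio(returns: list[float], periods_per_year: int = 252) -> float:
    """Annualized Sharpe from per-period returns: mean over sample standard
    deviation, scaled by sqrt(periods per year). Risk-free rate assumed zero."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

def max_drawdown(equity: list[float]) -> float:
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, worst = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

curve = [100.0, 110.0, 99.0, 120.0]
print(round(max_drawdown(curve), 3))  # 0.1  (the 110 -> 99 dip)
```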
## Project structure

```text
sandtable/
├── src/sandtable/
│   ├── core/           # events, event queue, backtest engine, result
│   ├── strategy/       # AbstractStrategy, MA crossover, mean reversion, buy-and-hold
│   ├── portfolio/      # position tracking, cash, equity curve, P&L
│   ├── execution/      # slippage, spread, market impact, commissions
│   ├── risk/           # risk manager, 7 composable rules, VaR
│   ├── data/           # Instrument, Equity, Future, Universe, TradingHours
│   ├── data_engine/    # CSV/YFinance providers, caching, DataHandler
│   ├── data_types/     # DataSource, DataBackend, ResultBackend, Metric enums
│   ├── research/       # parameter sweeps, walk-forward, strategy comparison
│   ├── stats/          # permutation test, t-test, block bootstrap
│   ├── metrics/        # Sharpe, Sortino, CAGR, drawdown, trade stats
│   ├── persistence/    # SQLite and MySQL result stores
│   ├── report/         # HTML tearsheet and comparison
│   ├── reporting/      # PDF tearsheet, TCA, risk reports
│   ├── viz/            # matplotlib charts, animation
│   ├── ui/             # shared Streamlit components
│   ├── utils/          # logging, exceptions, CLI helpers
│   ├── api.py          # run_backtest(), run_parameter_sweep()
│   └── config.py       # Settings with BACKTESTER_* env var backing
├── pages/              # Streamlit pages (Backtest, Sweep, Walkforward, Compare, Runs)
├── tests/unit/         # 380+ tests
├── data/fixtures/      # bundled CSVs (SPY, QQQ, IWM, AAPL, MSFT 2018-2023)
├── demo.py             # full-feature showcase script
├── app.py              # Streamlit entry point
├── docker-compose.yml  # MySQL 8
└── pyproject.toml
```
## Running tests

```shell
uv run pytest tests/ -q                                      # all tests
uv run pytest tests/ -v                                      # verbose
uv run pytest tests/unit/strategy/test_ma_crossover.py -v    # single file
uv run ruff check .                                          # lint
```
## Further reading
- Backtesting
- Event-driven architecture
- Moving average crossover
- Mean reversion
- Sharpe ratio
- Walk-forward analysis
- Value at Risk
## License
See LICENSE file.