Backtest trading strategies or train reinforcement learning agents with an event-driven market simulator.
Introduction
Backtest trading strategies or train reinforcement learning agents with tradingenv, an event-driven market simulator that implements the OpenAI/gym protocol.
Installation
tradingenv supports Python 3.7 or newer. The following command installs the latest release:
pip install tradingenv
Notebooks, software tests and documentation builds require extra dependencies, which can be installed with
pip install tradingenv[extra]
Example - Reinforcement Learning
The package is built upon the industry-standard gym interface and can therefore be used in conjunction with popular reinforcement learning frameworks, including rllib and stable-baselines3.
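The gym protocol boils down to a reset/step loop. The toy environment below sketches that contract in isolation; all names here are illustrative and unrelated to tradingenv's internals.

```python
class ToyEnv:
    """Minimal environment following the classic gym reset/step protocol.
    Purely illustrative; not part of tradingenv's API."""

    def __init__(self, horizon=5):
        self.horizon = horizon
        self._t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self._t = 0
        return 0.0

    def step(self, action):
        """Advance one step; return (observation, reward, done, info)."""
        self._t += 1
        return float(self._t), 0.0, self._t >= self.horizon, {}

env = ToyEnv()
obs = env.reset()
done = False
steps = 0
while not done:
    obs, reward, done, info = env.step(action=None)
    steps += 1
```

Any object exposing this reset/step contract can be driven by the same agent loop, which is what lets gym-compatible RL frameworks plug into environments like tradingenv.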
from tradingenv import TradingEnv
from tradingenv.contracts import ETF
from tradingenv.spaces import BoxPortfolio
from tradingenv.state import IState
from tradingenv.rewards import RewardLogReturn
from tradingenv.broker.fees import BrokerFees
from tradingenv.policy import AbstractPolicy
import yfinance

# Load prices of the SPY, TLT and TBIL ETFs from Yahoo Finance as a pandas.DataFrame.
prices = yfinance.Tickers(['SPY', 'TLT', 'TBIL']).history(period="12mo")['Close'].tz_localize(None)

# Specify the contract type of each column.
prices.columns = [ETF('SPY'), ETF('TLT'), ETF('TBIL')]

# Instantiate the trading environment.
env = TradingEnv(
    action_space=BoxPortfolio([ETF('SPY'), ETF('TLT')], low=-1, high=+1, as_weights=True),
    state=IState(),
    reward=RewardLogReturn(),
    prices=prices,
    initial_cash=1_000_000,
    latency=0,      # seconds
    steps_delay=1,  # trades are implemented with a delay of one step
    broker_fees=BrokerFees(
        markup=0.005,         # 0.5% broker markup on the deposit rate
        proportional=0.0001,  # 0.01% fee on traded notional
        fixed=1,              # $1 per trade
    ),
)

# OpenAI/gym protocol. Run an episode in the environment.
# env can be passed to RL agents of ray/rllib or stable-baselines3.
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
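To make the BrokerFees parameters above concrete, the sketch below shows how a proportional fee plus a fixed fee would combine on a single trade. This is illustrative arithmetic mirroring the parameter values, not tradingenv's internal fee logic.

```python
def trade_cost(notional, proportional=0.0001, fixed=1.0):
    """Illustrative cost of one trade: a proportional fee on the traded
    notional plus a fixed fee per trade. Mirrors the BrokerFees parameters
    above; not tradingenv's implementation."""
    return proportional * abs(notional) + fixed

# Trading $100,000 of notional: 0.01% of 100,000 = $10, plus $1 fixed.
cost = trade_cost(100_000)
```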
Example - Backtesting
Thanks to the event-driven design, tradingenv is agnostic with respect to the type and time-frequency of events. This means that you can run simulations using irregularly sampled trade and quote data, daily closing prices, monthly economic data or alternative data. Supported financial instruments include stocks, ETFs and futures.
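The event-driven idea can be sketched with a plain priority queue: heterogeneous events, whatever their type or sampling frequency, are simply processed in timestamp order. An illustrative sketch, not tradingenv's implementation:

```python
import heapq

# Events of mixed frequency, keyed by timestamp: a daily close, an
# intraday tick, and a monthly macro release.
events = [
    (3, "monthly-macro"),
    (1, "daily-close"),
    (2, "tick"),
]
heapq.heapify(events)  # min-heap ordered by timestamp

processed = []
while events:
    timestamp, event = heapq.heappop(events)
    processed.append(event)  # here a simulator would update state, reprice, trade
```

Because the loop only cares about timestamp order, the same machinery handles tick data, daily bars, or monthly releases without special-casing any frequency.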
class Portfolio6040(AbstractPolicy):
    """Implement the logic of your investment strategy or RL agent here."""

    def act(self, state):
        """Invest 60% of the portfolio in the SPY ETF and 40% in the TLT ETF."""
        return [0.6, 0.4]
# Run the backtest.
track_record = env.backtest(
    policy=Portfolio6040(),
    risk_free=prices['TBIL'],
    benchmark=prices['SPY'],
)
# The track_record object stores the results of your backtest.
track_record.tearsheet()
track_record.fig_net_liquidation_value()
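For intuition on the reward used in the examples, a log-return reward plausibly measures the log change in net liquidation value between consecutive steps. The standalone sketch below shows that arithmetic; it is an assumption about what RewardLogReturn computes, not the library's code.

```python
import math

def log_return(nlv_prev, nlv_now):
    """Log return of net liquidation value between two steps.
    An illustrative guess at RewardLogReturn's formula, shown for intuition."""
    return math.log(nlv_now / nlv_prev)

# A 1% gain in net liquidation value is roughly 0.00995 in log terms.
r = log_return(1_000_000, 1_010_000)
```

Log returns are additive across steps, so the sum of per-step rewards over an episode equals the log of the episode's total growth factor.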
Relevant projects
btgym: an OpenAI Gym-compatible environment for the backtrader backtesting/trading library, designed to provide a gym-integrated framework for running reinforcement learning experiments in [close to] real-world algorithmic trading environments.
gym: A toolkit for developing and comparing reinforcement learning algorithms.
qlib: Qlib provides a strong infrastructure to support quant research.
rllib: open-source library for reinforcement learning.
stable-baselines3: a set of reliable implementations of reinforcement learning algorithms in PyTorch.
Developers
You are welcome to contribute features, examples and documentation, or to report issues.
You can run the software tests by typing pytest in the command line, assuming that the folder tests/ is in the current working directory.
To refresh and build the documentation:
pytest tests/notebooks
sphinx-apidoc -f -o docs/source tradingenv
cd docs
make clean
make html