
Backtest trading strategies or train reinforcement learning agents with an event-driven market simulator.

Project description


Introduction

Backtest trading strategies or train reinforcement learning agents with tradingenv, an event-driven market simulator that implements the OpenAI/gym protocol.

Installation

tradingenv supports Python 3.7 or newer. The following command installs the latest release.

pip install tradingenv

Notebooks, software tests and documentation builds require extra dependencies, which can be installed with

pip install tradingenv[extra]

Examples

Reinforcement Learning - Lazy Initialisation

The package is built upon the industry-standard gym and therefore can be used in conjunction with popular reinforcement learning frameworks including rllib and stable-baselines3.

from tradingenv.env import TradingEnvXY
import yfinance

# Load data from Yahoo Finance.
tickers = yfinance.Tickers(['SPY', 'TLT', 'TBIL', '^IRX'])
data = tickers.history(period="1y", progress=False)['Close'].tz_localize(None)
Y = data[['SPY', 'TLT']]
X = Y.rolling(12).mean() - Y.rolling(26).mean()

# Lazy initialization of the trading environment.
env = TradingEnvXY(X, Y)

# OpenAI/gym protocol. Run an episode in the environment.
# env can be passed to RL agents of ray/rllib, stable-baselines3 or ElegantRL for training.
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
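
As noted in the comments above, env can be handed to an off-the-shelf reinforcement learning library. The snippet below is a minimal sketch (not part of tradingenv itself) of training a stable-baselines3 agent, assuming a stable-baselines3 release whose gym API matches the classic reset/step signatures used above; the algorithm and settings are purely illustrative.

from stable_baselines3 import PPO

# Train a PPO agent on the environment (illustrative hyperparameters only).
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# Roll out the trained policy for one episode.
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)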

Reinforcement Learning - Custom Initialisation

Use custom initialisation to personalise the design of the environment, including the reward function, transaction costs, observation window and leverage.

env = TradingEnvXY(
    X=X,                      # Use moving averages crossover as features
    Y=Y,                      # to trade SPY and TLT ETFs.
    transformer='z-score',    # Features are standardised to N(0, 1).
    reward='logret',          # Reward is the log return of the portfolio at each step,
    cash=1000000,             # starting with $1M.
    spread=0.0002,            # Transaction costs include a 0.02% spread,
    markup=0.005,             # a 0.5% broker markup on deposit rate,
    fee=0.0002,               # a 0.02% dealing fee of traded notional
    fixed=1,                  # and a $1 fixed fee per trade.
    margin=0.02,              # Do not trade if trade size is smaller than 2% of the portfolio.
    rate=data['^IRX'] / 100,  # Rate used to compute the yield on idle cash and cost of leverage.
    latency=0,                # Trades are implemented with no latency
    steps_delay=1,            # but a delay of one day.
    window=1,                 # The observation is the current state of the market,
    clip=5.,                  # clipped between -5 and +5 standard deviations.
    max_long=1.5,             # The maximum long position is 150% of the portfolio,
    max_short=-1.,            # the maximum short position is 100% of the portfolio.
    calendar='NYSE',          # Use the NYSE calendar to schedule trading days.
)
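
For intuition only, the transformer='z-score', clip=5. and reward='logret' arguments suggest transformations along the lines of the sketch below. This is not the library's internal code, just the formulas implied by the comments above, with hypothetical helper names.

import numpy as np
import pandas as pd

def zscore_clip(features: pd.DataFrame, clip: float = 5.0) -> pd.DataFrame:
    """Standardise each feature column to roughly N(0, 1), then clip to +/- clip std devs."""
    z = (features - features.mean()) / features.std()
    return z.clip(lower=-clip, upper=clip)

def log_return_reward(nlv_now: float, nlv_prev: float) -> float:
    """One-step reward: log return of the portfolio's net liquidation value."""
    return float(np.log(nlv_now / nlv_prev))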

Backtesting

Thanks to its event-driven design, tradingenv is agnostic to the type and time-frequency of events. This means you can run simulations on irregularly sampled trade and quote data, daily closing prices, monthly economic data or alternative data. Supported financial instruments include stocks, ETFs and futures.

class Portfolio6040(AbstractPolicy):
    """Implement logic of your investment strategy or RL agent here."""

    def act(self, state):
        """Invest 60% of the portfolio in SPY ETF and 40% in TLT ETF."""
        return [0.6, 0.4]

# Run the backtest.
track_record = env.backtest(
    policy=Portfolio6040(),
    risk_free=data['TBIL'],
    benchmark=data['SPY'],
)

# The track_record object stores the results of your backtest.
track_record.tearsheet()

track_record.fig_net_liquidation_value()
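
Because the simulator is event-driven, the same policy can also be backtested on data sampled at a different frequency. Below is a hedged sketch on month-end closes, assuming TradingEnvXY accepts the resampled series as-is; the resampling rule and window lengths are illustrative only.

# Illustrative only: re-run the 60/40 policy on month-end data.
Y_monthly = Y.resample('M').last()
X_monthly = Y_monthly.rolling(3).mean() - Y_monthly.rolling(6).mean()
env_monthly = TradingEnvXY(X_monthly, Y_monthly)
track_record_monthly = env_monthly.backtest(
    policy=Portfolio6040(),
    risk_free=data['TBIL'].resample('M').last(),
    benchmark=Y_monthly['SPY'],
)
track_record_monthly.tearsheet()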

Relevant projects

  • btgym: an OpenAI Gym-compatible environment for the backtrader backtesting/trading library, designed to provide a gym-integrated framework for running reinforcement learning experiments in [close to] real-world algorithmic trading environments.
  • gym: A toolkit for developing and comparing reinforcement learning algorithms.
  • qlib: Qlib provides a strong infrastructure to support quant research.
  • rllib: an open-source library for reinforcement learning.
  • stable-baselines3: a set of reliable implementations of reinforcement learning algorithms in PyTorch.

Developers

You are welcome to contribute features, examples and documentation, or to open issues.

You can run the software tests by typing pytest in the command line, assuming that the tests folder is in the current working directory.

To refresh and build the documentation:

pytest tests/notebooks
sphinx-apidoc -f -o docs/source tradingenv
cd docs
make clean
make html

Download files

Download the file for your platform.

Source Distribution

tradingenv-0.1.2.tar.gz (118.1 kB)

Built Distribution

tradingenv-0.1.2-py3-none-any.whl (78.2 kB)

File details

Details for the file tradingenv-0.1.2.tar.gz.

File metadata

  • Download URL: tradingenv-0.1.2.tar.gz
  • Upload date:
  • Size: 118.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for tradingenv-0.1.2.tar.gz
  • SHA256: edbfaf2a42be9fcb82ffec7fdb4f4ef53bea1d668211f3a5211ef0546543b2a0
  • MD5: ac772b5294085667c0dfa776ad0cd50c
  • BLAKE2b-256: 5214cbbbadbed1f5fa2de48a9adbb3b8615c2f509501f1e075a573b412a16587


File details

Details for the file tradingenv-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: tradingenv-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 78.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for tradingenv-0.1.2-py3-none-any.whl
  • SHA256: e5e71f5c1fbbdc4384bc567f2a3aa73c72cf36cc231e1a55af7d352e6235f5de
  • MD5: d6df740b1a9e002367c91ee18bcaafd6
  • BLAKE2b-256: 21a9c8a0a6af6f1eee37c901339aa1e10107c6c5a96ea45e30b4acd97bb15fe5

