# DeepTrade

A simple trading system for backtesting model-based RL strategies.
DeepTrade is a backtesting system and library designed to test and evaluate machine learning-based trading strategies.
## Getting Started
### Prerequisites
DeepTrade requires Python 3.8 or higher and PyTorch 1.9.0 or higher.
We recommend using a conda environment to manage dependencies. You can create and activate a new environment with the following commands:

```bash
conda create --name deeptrade-env python=3.10
conda activate deeptrade-env
```
### Installation
#### Standard Installation
> [!WARNING]
> The project is on PyPI as `deeptrade-mbrl`.

```bash
pip install deeptrade-mbrl
```
#### Development Installation
If you want to modify the library, clone the repository and set up a development environment:

```bash
git clone https://github.com/AOS55/deeptrade.git
cd deeptrade
pip install -e .
```
### Running Tests
To test the library, either run `pytest` at the root or specify test directories from the root with:

```bash
python -m pytest tests/core
python -m pytest tests/instruments
```
## Usage
The core idea of DeepTrade is to backtest machine learning trading strategies on either synthetic or real data. Backtesting splits the data into two sets: training data, available at the start of the theoretical trading period, and backtest data, used to evaluate the strategy from the point at which it begins trading. The following provides an overview of the basic components of the library; examples of various backtests are provided in the notebooks directory.
The train-backtest split is shown below:
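As a minimal NumPy sketch of this split (the synthetic prices and the 80/20 split point are illustrative assumptions, not the library's API):

```python
import numpy as np

# Synthetic price path for illustration only.
prices = 100.0 + np.cumsum(np.random.default_rng(0).normal(size=1_000))

split = int(0.8 * len(prices))  # assumed 80/20 train-backtest split
train_data = prices[:split]     # available when the strategy is designed and trained
backtest_data = prices[split:]  # held out to evaluate the strategy out-of-sample
```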
The trading problem is modeled as a classical Markov Decision Process (MDP), with the following components:
- Environment: the trading environment represents the world the agent interacts with, $p(s'|s, a)$. It is responsible for providing the agent with observations, rewards, and other information about the state of the environment, and is defined by the `gymnasium` interface. The environments include:
  - `SingleInstrument-v0`: a single-instrument trading environment designed for a simple single-asset portfolio.
  - `MultiInstrument-v0`: a multi-instrument trading environment designed to hold a multiple-asset portfolio.
Each of the trading environments has the following key components:
- Market data: either generated synthetically or taken from a real dataset. Data is queried at time $t$, which advances by `period` each time around the env-agent loop.
- Account: represents the portfolio (a toy sketch follows this list), consisting of:
  - `Margin`: the amount of cash available.
  - `Positions`: the quantity of the asset held.
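For intuition, the account state could be sketched as below; the `Account` name and fields here are assumptions for illustration, not DeepTrade's internal types:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Account:  # hypothetical sketch, not the library's actual class
    margin: float                                               # cash available
    positions: Dict[str, float] = field(default_factory=dict)  # instrument -> quantity held

account = Account(margin=10_000.0)
account.positions["instrument_0"] = 5.0  # hold 5 units of an instrument
```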
The observation from the environment is a NumPy array consisting of (see the unpacking sketch below):

- `returns`: $r_{t-\tau:t}$ from the asset price, usually log returns over a window of size $\tau$.
- `position`: the position of the portfolio in the asset.
- `margin`: the amount of cash available.
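A hedged sketch of unpacking such an observation, assuming the array concatenates the returns window, position, and margin in that order, with an assumed window size $\tau = 10$:

```python
import numpy as np

window = 10                 # assumed returns window size, tau
obs = np.zeros(window + 2)  # placeholder observation of shape (window + 2,)

returns = obs[:window]      # r_{t-tau:t}, e.g. log returns over the window
position = obs[window]      # position of the portfolio in the asset
margin = obs[window + 1]    # cash available
```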
- Agent: the agent, $\pi(a|s)$, is the decision maker that interacts with the environment. The agent is responsible for selecting actions based on observations from the environment. Model-based RL (MBRL) agents are provided along with classical systematic trading strategies. These include:
  - MBRL agents:
    - `PETS`: Probabilistic Ensemble Trajectory Sampling from Chua et al. (2018).
    - `MBPO`: :construction: Model-Based Policy Optimization from Janner et al. (2019). :construction:
    - `Dreamer`: :construction: Dream to Control from Hafner et al. (2019). :construction:
  - Systematic agents:
    - `HoldAgent`: a simple buy-and-hold strategy.
    - `EWMACAgent`: exponentially weighted moving average crossover, momentum-based trend following (see the sketch after this list).
    - `BreakoutAgent`: a breakout strategy based on the high and low of the previous `n` periods.
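To make the EWMAC idea concrete, here is a minimal pandas sketch of a crossover signal. It is illustrative only, not the library's `EWMACAgent` implementation, and the `fast`/`slow` span parameters are assumptions:

```python
import numpy as np
import pandas as pd

def ewmac_signal(prices: pd.Series, fast: int = 16, slow: int = 64) -> pd.Series:
    """Toy EWMAC: long when the fast EWMA is above the slow EWMA, short otherwise."""
    fast_ewma = prices.ewm(span=fast).mean()
    slow_ewma = prices.ewm(span=slow).mean()
    return np.sign(fast_ewma - slow_ewma)  # +1 long, -1 short, 0 flat

# Example on a synthetic price path.
prices = pd.Series(100.0 + np.cumsum(np.random.default_rng(1).normal(size=500)))
positions = ewmac_signal(prices)
```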
The overall environment-agent loop is shown below:
### Environment
The following is a basic example of how to instantiate an environment with `deeptrade.env`:
```python
import gymnasium as gym
import deeptrade.env

env = gym.make("SingleInstrument-v0")
obs, info = env.reset()

terminated, truncated = False, False
while not (terminated or truncated):  # loop until the episode ends
    action = env.action_space.sample()  # replace with an agent's policy
    obs, reward, terminated, truncated, info = env.step(action)  # gymnasium returns a 5-tuple
    print(f"Reward: {reward}")
```
## Contributing
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
## Citing
If you use this project in your research, please consider citing it with:
```bibtex
@misc{deeptrade,
  author = {DeepTrade},
  title = {DeepTrade: A Model Based Reinforcement Learning System for Trading},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/AOS55/deeptrade}},
}
```
## Disclaimer
DeepTrade is intended for educational and research purposes; if you use it for live trading, you do so entirely at your own risk.