Project description
predictit
Library/framework for making predictions. It chooses the best of 20 models (ARIMA, regressions, LSTM...) from libraries such as statsmodels, scikit-learn and TensorFlow, plus some of its own models. The library also automatically preprocesses the data and chooses optimal prediction parameters.
Output
The most common outputs are an interactive plotly graph, a numpy array of results, or deployment to a database.
A table of model errors is also printed.
Official repo and documentation links
Official readthedocs documentation
Installation
pip install predictit
Sometimes you can have issues installing libraries from the requirements (e.g. numpy when BLAS / LAPACK is missing). Two libraries - TensorFlow and pyodbc - are not in the requirements because they are not necessary but can be troublesome to install. If a library does not install with pip, check which dependency fails, install it manually (stackoverflow helps), and repeat...
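If you want the TensorFlow-based models (e.g. LSTM) or database output via pyodbc, install those optional libraries separately. A minimal sketch, assuming plain pip installs work on your platform:
pip install tensorflow
pip install pyodbc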
How to
The software can be used in three ways: as a Python library, with command line arguments, or as normal Python scripts. The main function is predict in the main.py script. There is also predict_multiple_columns if you want to predict more at once (several columns or time frequencies), and a compare_models function that evaluates test data and can tell you which models perform best, so you can then use only those models. It is also recommended to run the arguments optimization only once, set the resulting values as initial parameters in the config, and then turn optimization off for performance reasons.
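A minimal sketch of that workflow as library calls - the data argument mirrors the predict example below, but the exact signatures of compare_models and predict_multiple_columns are assumptions, not the documented API:
import numpy as np
import predictit

data = np.random.randn(2, 300)  # illustrative data: two series of 300 points

# Evaluate the configured models on test data (call signature assumed)
predictit.main.compare_models(data)

# Predict several columns at once (call signature assumed)
predictions = predictit.main.predict_multiple_columns(data)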
Command line arguments as well as function arguments override the default config.py values. Not all config options are available as function or command line arguments.
Simple example of the predict function, installed from PyPI, with function arguments
import predictit
import numpy as np
predictions = predictit.main.predict(np.random.randn(1, 100), predicts=3, plot=1)
Simple example of using main.py script
Open config.py (the only script you need to edit, and it is very simple) and do the setup - mainly used_function and data, or data_source and the path. Then just run main.py.
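For illustration only, the same setup expressed through the config.update API shown later on this page - the keys used_function and csv_path are assumptions inferred from the option names and command line flags mentioned here, and may not match config.py exactly:
import predictit

config = predictit.config.config
config.update({
    'used_function': 'predict',  # assumed key mirroring the used_function option above
    'data_source': 'csv',  # load data from a CSV file
    'csv_path': 'test_data/daily-minimum-temperatures.csv',  # assumed key mirroring the --csv_path flag
    'predicted_column': 1,  # which column to predict
})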
Simple example of using command line arguments
Run the code below in a terminal in the predictit folder and change the CSV path (test data are not included in the library because of their size!). Use main.py --help for more info on parameters.
python main.py --function predict --data_source 'csv' --csv_path 'test_data/daily-minimum-temperatures.csv' --predicted_column 1
Example of using the library as a pro, with editing of the config
import predictit
config = predictit.config.config
config.update({
    'predicts': 7,  # Create 7 predictions
    'data_source': 'test',  # Use built-in test data ('csv' would load data from a CSV file)
    'datalength': 1000,  # Consider only the last 1000 data points
    'predicted_column': 'Temp',  # Name of the column that we want to predict
    'compareit': 6,  # Visualize the 6 best models
    'repeatit': 30,  # Repeat the calculation 30 times on shifted data to evaluate the error criterion
    'other_columns': 0,  # Whether to use other columns or not
    'debug': 1,  # Debug mode
    # Choose models that will be computed
    'used_models': {
        "AR (Autoregression)": predictit.models.statsmodels_autoregressive,
        "ARIMA (Autoregression integrated moving average)": predictit.models.statsmodels_autoregressive,
        "Autoregressive Linear neural unit": predictit.models.autoreg_LNU,
        "Conjugate gradient": predictit.models.conjugate_gradient,
        "Sklearn regression": predictit.models.sklearn_regression,
    },
    # Define parameters of the models
    'n_steps_in': 20,  # How many lagged values enter the models
    'output_shape': 'batch',  # Whether to use batch or one-step models
    'models_parameters': {
        'AR (Autoregression)': {'model': 'ar', 'method': 'cmle', 'ic': 'aic', 'trend': 'nc', 'solver': 'lbfgs'},
        'ARIMA (Autoregression integrated moving average)': {'model': 'arima', 'p': 3, 'd': 0, 'q': 0, 'method': 'css', 'ic': 'aic', 'trend': 'nc', 'solver': 'nm'},
        'Autoregressive Linear neural unit': {'plot': 0, 'mi': 1, 'mi_multiple': 1, 'epochs': 20, 'w_predict': 0, 'minormit': 1, 'damping': 1},
        'Conjugate gradient': {'epochs': 5},
        'Bayes ridge regression': {'regressor': 'bayesianridge', 'n_iter': 300, 'alpha_1': 1.e-6, 'alpha_2': 1.e-6, 'lambda_1': 1.e-6, 'lambda_2': 1.e-6},
        'Sklearn regression': {'regressor': 'linear', 'alpha': 0.0001, 'n_iter': 100, 'epsilon': 1.35, 'alphas': [0.1, 0.5, 1], 'gcv_mode': 'auto', 'solver': 'auto', 'alpha_1': 1.e-6, 'alpha_2': 1.e-6, 'lambda_1': 1.e-6, 'lambda_2': 1.e-6, 'n_hidden': 20, 'rbf_width': 0, 'activation_func': 'selu'},
    },
})
predictions = predictit.main.predict()
Or, if you downloaded it from GitHub rather than via PyPI, just edit the config as you need and run main.py.
Project details
Download files
Download the file for your platform.
Source Distribution
predictit-1.22.tar.gz (42.9 kB)
Built Distribution
predictit-1.22-py3-none-any.whl (51.2 kB)
File details
Details for the file predictit-1.22.tar.gz
File metadata
- Download URL: predictit-1.22.tar.gz
- Upload date:
- Size: 42.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6a153bed675576610c225bc1ba2ae169760789d3bc26b156fbad89ef0078fc1c
MD5 | 3d1325baca1796769e2c0663a64130e0
BLAKE2b-256 | 494b0c91ea6ada888d9e74952fcab36c611e2ab7d5b43788d1efbb79c9fe89f7
File details
Details for the file predictit-1.22-py3-none-any.whl
File metadata
- Download URL: predictit-1.22-py3-none-any.whl
- Upload date:
- Size: 51.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | ca6b67a7de6c088a762e3090668ed7f147e9823285309544ccc16a19ea6b23ca
MD5 | 4e892552422cd6f30a0e835df59a49ac
BLAKE2b-256 | 8a23b3aff85ab733d3e06c87b297d1268c1d62f851c5f808986cd8d395daf20b