predictit
Library/framework for making predictions. It chooses the best of 20 models (ARIMA, regressions, LSTM...) from libraries like statsmodels, scikit-learn and TensorFlow, plus some of its own models. There are hundreds of customizable options (none of them required, of course) as well as some config presets.
The library contains model hyperparameter optimization as well as config variable optimization. That means it can find the optimal preprocessing (smoothing, dropping non-correlated columns, standardization) and, on top of that, the optimal inner parameters of the models, such as the number of neuron layers.
Output
The most common outputs are an interactive Plotly graph, a numpy array of results, or deployment to a database.
The return type of the main predict function depends on configuration.py: it can return the best prediction as an array or all predictions as a dataframe. An interactive HTML plot is also created.
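For example, to get every model's predictions back as a dataframe, a minimal sketch reusing the 'all_dataframe' return_type value shown in the optimization example further below:
import numpy as np
import predictit
from predictit.configuration import config
config.return_type = 'all_dataframe'  # return all models' predictions as a dataframe instead of only the best one
predictions_df = predictit.main.predict(data=np.random.randn(1, 100))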
Official repo and documentation links
Official readthedocs documentation
Installation
Python >= 3.6 is required; Python 2 is not supported. Install just with
pip install predictit
Sometimes you can have issues installing some of the libraries from the requirements (e.g. numpy if BLAS/LAPACK is missing). Two libraries - TensorFlow and pyodbc - are not in the requirements because they are not necessary, but can be troublesome to install. If the library does not install with pip, check which dependency fails, install it manually (Stack Overflow usually helps) and repeat.
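If you want the TensorFlow models or the SQL data source, you can install those two optional dependencies yourself, e.g. (standard PyPI package names; install only what you need):
pip install tensorflow pyodbc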
How to
The software can be used in three ways: as a Python library, with command line arguments, or as normal Python scripts.
The main function is predict in the main.py script.
There is also predict_multiple_columns if you want to predict more at once (columns or time frequencies), and a compare_models function that evaluates defined test data and can tell you which models perform best, so you can then use only those models.
Simple example of using predictit as a Python library with function arguments
import predictit
import numpy as np
predictions = predictit.main.predict(data=np.random.randn(1, 100), predicts=3, plotit=1)
Simple example of using it as a Python library and editing the config
import predictit
from predictit.configuration import config
# You can edit config in two ways
config.data_source = 'csv'
config.csv_full_path = 'https://datahub.io/core/global-temp/r/monthly.csv' # You can use local path on pc as well... "/home/dan/..."
config.predicted_column = 'Mean'
config.datetime_index = 'Date'
# Or
config.update({
    'predicts': 3,
    'default_n_steps_in': 15
})
predictions = predictit.main.predict()
Simple example of using main.py as a script
Open configuration.py (the only script you need to edit, and it is very simple) and do the setup: mainly used_function and data (or data_source and path). Then just run main.py.
Simple example of using command line arguments
Run the code below in a terminal in the predictit folder. Use python main.py --help for more info about the parameters.
python main.py --used_function predict --data_source 'csv' --csv_full_path 'https://datahub.io/core/global-temp/r/monthly.csv' --predicted_column "'Mean'"
Explore config
To see all the possible values in configuration.py from your IDE, use
predictit.configuration.print_config()
Example of compare_models function
import numpy as np
import predictit
from predictit.configuration import config
my_data_array = np.random.randn(2000, 4)  # Define your data here
# You can compare models on various parts of the same data or on different data (check the configuration for how to insert a dictionary with data names)
config.update({
    'data_all': (my_data_array[-2000:], my_data_array[-1500:], my_data_array[-1000:])
})
predictit.main.compare_models()
Example of predict_multiple_columns function
import pandas as pd
import predictit
from predictit.configuration import config
config.data = pd.read_csv("https://datahub.io/core/global-temp/r/monthly.csv")
# Define a list of columns, or '*' to predict all of the columns
config.predicted_columns = ['*']
predictit.main.predict_multiple_columns()
Example of config variable optimization
import predictit
from predictit.configuration import config
config.update({
    'data_source': 'csv',
    'csv_full_path': "https://datahub.io/core/global-temp/r/monthly.csv",
    'predicted_column': 'Mean',
    'return_type': 'all_dataframe',
    'optimization': 1,
    'optimization_variable': 'default_n_steps_in',
    'optimization_values': [12, 20, 40],
    'plot_all_optimized_models': 1,
    'print_detailed_result': 1
})
predictions = predictit.main.predict()
Hyperparameter tuning
To optimize hyperparameters, just set optimizeit: 1 and define the model parameter limits. How to use it is described in the comments in configuration.py. It is not a grid brute force; a heuristic method based on interval halving is used, but it can still be time consuming. It is recommended to tune only the parameters that are worth it, or to tune them in parts.
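A minimal sketch follows; the limits key name and format below are assumptions for illustration only, so check the comments in configuration.py for the actual ones:
import predictit
from predictit.configuration import config
config.optimizeit = 1  # turn on hyperparameter tuning
# Hypothetical limits dict - the real key name and format are described in the configuration.py comments
config.models_parameters_limits = {
    'Conjugate gradient': {'epochs': [50, 500]},
}
predictions = predictit.main.predict()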
GUI
It is possible to use a basic GUI, but only with a CSV data source.
Just run gui_start.py if you have downloaded the software, or call predictit.gui_start.run_gui() if you installed it via PyPI.
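For example, from Python:
import predictit
predictit.gui_start.run_gui()  # opens the basic GUI (CSV data source only)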
Example of using the library as a pro, with deeper config editing
import predictit
from predictit.configuration import config
config.update({
    'data_source': 'test',  # Data source ('csv', 'sql' or 'test')
    'csv_full_path': r'C:\Users\truton\ownCloud\Github\predictit_library\predictit\test_data\5000 Sales Records.csv',  # Full CSV path with suffix
    'predicted_column': '',  # Column name that we want to predict
    'predicts': 7,  # Number of predicted values - 7 by default
    'print_number_of_models': 6,  # Visualize the 6 best models
    'repeatit': 50,  # Repeat the calculation 50 times on shifted data to evaluate the error criterion
    'other_columns': 0,  # Whether to use other columns or not
    'debug': 1,  # Whether to print details and warnings

    # Choose the models that will be computed - remove this to use all the models
    'used_models': {
        "AR (Autoregression)": predictit.models.statsmodels_autoregressive,
        "ARIMA (Autoregression integrated moving average)": predictit.models.statsmodels_autoregressive,
        "Autoregressive Linear neural unit": predictit.models.autoreg_LNU,
        "Conjugate gradient": predictit.models.conjugate_gradient,
        "Sklearn regression": predictit.models.sklearn_regression,
    },

    # Define parameters of models
    'models_parameters': {
        'AR (Autoregression)': {'used_model': 'ar', 'method': 'cmle', 'ic': 'aic', 'trend': 'nc', 'solver': 'lbfgs'},
        'ARIMA (Autoregression integrated moving average)': {'used_model': 'arima', 'p': 6, 'd': 0, 'q': 0, 'method': 'css', 'ic': 'aic', 'trend': 'nc', 'solver': 'nm'},
        'Autoregressive Linear neural unit': {'mi_multiple': 1, 'mi_linspace': (1e-5, 1e-4, 3), 'epochs': 10, 'w_predict': 0, 'minormit': 0},
        'Conjugate gradient': {'epochs': 200},
        'Bayes ridge regression': {'regressor': 'bayesianridge', 'n_iter': 300, 'alpha_1': 1.e-6, 'alpha_2': 1.e-6, 'lambda_1': 1.e-6, 'lambda_2': 1.e-6},
        'Sklearn regression': {'regressor': 'linear', 'alpha': 0.0001, 'n_iter': 100, 'epsilon': 1.35, 'alphas': [0.1, 0.5, 1], 'gcv_mode': 'auto', 'solver': 'auto',
                               'alpha_1': 1.e-6, 'alpha_2': 1.e-6, 'lambda_1': 1.e-6, 'lambda_2': 1.e-6, 'n_hidden': 20, 'rbf_width': 0, 'activation_func': 'selu'},
    },
})
predictions = predictit.main.predict()
Download files
- Source Distribution: predictit-1.42.tar.gz (49.9 kB)
- Built Distribution: predictit-1.42-py3-none-any.whl (56.9 kB)
File details
Details for the file predictit-1.42.tar.gz.
File metadata
- Download URL: predictit-1.42.tar.gz
- Upload date:
- Size: 49.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/44.0.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | cc2209a64cb8592779cd6eb7bdab54d3f6d2fad42ad8a344b191512a93f22a2e
MD5 | e76e726acf001407055db8a42265d9a4
BLAKE2b-256 | 1c089b3e43840a1b0e4f332ea836bc06139429265a7dd8c788d45c9e72f7c248
File details
Details for the file predictit-1.42-py3-none-any.whl.
File metadata
- Download URL: predictit-1.42-py3-none-any.whl
- Upload date:
- Size: 56.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/44.0.0 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | c0e33f2c41d41b0fdbca1d64a01b026cbfb81f5d26775735166b747c85e41925
MD5 | 1eb86b17311a5eef7bb3b2a563af9ccf
BLAKE2b-256 | 16232551b5a54cb8c03af099dab480cc6cef0481ec628913eecde8d54d150736