
Library for loading, consolidating, feature engineering and preprocessing of data.

Project description

mydatapreprocessing


Load data from a web link or a local file (JSON, CSV, Excel, parquet, h5...), consolidate it (resample data, clean NaN values, do string embedding), derive new features via column derivations, and do preprocessing like standardization or smoothing. If you want to see how the functions work, check their docstrings; working examples with printed results are also in the tests (visual.py).

Links

Repo on GitHub

Official readthedocs documentation

Installation

Python >=3.6 (Python 2 is not supported).

Install with

pip install mydatapreprocessing

Some libraries are needed only for certain data inputs, so not every user will use them. If you want to be sure to have all of them, download requirements_advanced.txt and install the advanced requirements with pip install -r requirements_advanced.txt.

Examples

You can use the live Jupyter demo on Binder.

import mydatapreprocessing as mdp

Load data

You can use

  • Python objects (numpy.ndarray, pd.DataFrame, list, tuple, dict)
  • local files
  • web urls

You can load multiple data sources at once by passing them in a list.

The syntax is always the same.

data = mdp.load_data.load_data(
    "https://www.ncdc.noaa.gov/cag/global/time-series/globe/land_ocean/ytd/12/1880-2016.json",
    request_datatype_suffix=".json",
    data_orientation="index",
    predicted_table="data",
)
# data2 = mdp.load_data.load_data(["PATH_TO_FILE.csv", "PATH_TO_FILE2.csv"])
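
load_data also accepts in-memory Python objects directly, as listed above. A minimal sketch (the column names below are invented for illustration):

import numpy as np
import pandas as pd

# In-memory objects need no suffix or orientation arguments.
df = pd.DataFrame({"temperature": [21.1, 21.4, 21.9], "humidity": [40.0, 42.0, 41.0]})
data_from_dataframe = mdp.load_data.load_data(df)

# A plain NumPy array works the same way.
data_from_array = mdp.load_data.load_data(np.random.randn(100, 2))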

Consolidation

If you want to use the data in machine learning models, you will probably want to remove NaN values, convert string columns to numeric where possible, do encoding or keep only numeric data, and resample.

data_consolidated = mdp.preprocessing.data_consolidation(
    data, predicted_column=0, remove_nans_threshold=0.9, remove_nans_or_replace="interpolate"
)
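
As a small sketch of what consolidation does, a toy frame (invented here for illustration) with a NaN and a string column comes out as clean numeric data:

import numpy as np
import pandas as pd

# Toy input: one NaN to interpolate and one string column to convert or drop.
raw = pd.DataFrame({"value": [1.0, np.nan, 3.0, 4.0], "note": ["a", "b", "b", "a"]})
raw_consolidated = mdp.preprocessing.data_consolidation(
    raw, predicted_column=0, remove_nans_or_replace="interpolate"
)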

Feature engineering

Functions in feature_engineering and preprocessing expect data in the shape (n_samples, n_features). n_samples is usually much bigger than n_features, so data_consolidation transposes the data if necessary.

Extend the original data with

data_extended = mdp.feature_engineering.add_derived_columns(data_consolidated, differences=True, rolling_means=32)

Preprocessing

preprocess_data returns the preprocessed data, but also the last undifferenced value and the scaler for inverse transformation, so unpack the values you don't need with _.

data_preprocessed, _, _ = mdp.preprocessing.preprocess_data(
    data_extended,
    remove_outliers=3,
    smoothit=None,
    correlation_threshold=False,
    data_transform=False,
    standardizeit="standardize",
)
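
The two values replaced with _ above are the last undifferenced value and the fitted scaler; keep them if you need to map results back to the original scale. A minimal sketch, assuming the returned scaler follows a scikit-learn-like inverse_transform convention (an assumption, not confirmed by this README):

data_preprocessed, last_undiff_value, final_scaler = mdp.preprocessing.preprocess_data(
    data_extended, standardizeit="standardize"
)

# Assumption: the scaler exposes a scikit-learn-style inverse_transform.
if final_scaler is not None:
    original_scale = final_scaler.inverse_transform(data_preprocessed)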

Creating inputs

Create model inputs with

seqs, Y, x_input, test_inputs = mdp.create_model_inputs.make_sequences(
    data_extended.values, predicts=7, repeatit=3, n_steps_in=6, n_steps_out=1, constant=1
)
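
A quick sanity check of the returned pieces (assuming they are NumPy arrays, which matches the .values input above):

# Shapes depend on the predicts, n_steps_in and n_steps_out chosen above.
print(seqs.shape, Y.shape, x_input.shape)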

Download files

Download the file for your platform.

Source Distribution

mydatapreprocessing-2.0.11.tar.gz (28.7 kB)

Uploaded Source

Built Distribution


mydatapreprocessing-2.0.11-py3-none-any.whl (31.4 kB)

Uploaded Python 3

File details

Details for the file mydatapreprocessing-2.0.11.tar.gz.

File metadata

  • Download URL: mydatapreprocessing-2.0.11.tar.gz
  • Upload date:
  • Size: 28.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.8.1 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for mydatapreprocessing-2.0.11.tar.gz:

  • SHA256: f360e34bdea66e6b815dad0cc47cb6e9162e4e3c02a2b53df923c96d42080628
  • MD5: 1c2cd3d1c059fca516e82d0d4df19b4d
  • BLAKE2b-256: d863d4bed0b261eb628cccba1bbbadb44fee72490200df0db2962d621d626657


File details

Details for the file mydatapreprocessing-2.0.11-py3-none-any.whl.

File metadata

  • Download URL: mydatapreprocessing-2.0.11-py3-none-any.whl
  • Upload date:
  • Size: 31.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.7.1 importlib_metadata/4.8.1 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for mydatapreprocessing-2.0.11-py3-none-any.whl:

  • SHA256: ccef128cf0ffbf13917cfe8e2af019894011b574afd60210de045e4ec1485d7e
  • MD5: b8dab15c62fd25a9b470d8e6877de8ba
  • BLAKE2b-256: 92be7d24804ce2c1753e0fd77884f986553564c775dca52eec2075c7b9164f1c

