
Library/framework for making predictions.

Project description

mydatapreprocessing


Load data from a web link or a local file (JSON, CSV, Excel, parquet, h5, ...), consolidate it, and run preprocessing such as resampling, standardization, string embedding, derivation of new columns, and feature extraction, all based on configuration.

The library contains three modules.

Preprocessing

The first module, preprocessing, loads data, consolidates it, and runs the preprocessing. It contains functions such as load_data, data_consolidation, preprocess_data, preprocess_data_inverse, add_frequency_columns, rolling_windows, and add_derived_columns.

Example

import numpy as np
import pandas as pd

import mydatapreprocessing.preprocessing as mdpp

data = "https://blockchain.info/unconfirmed-transactions?format=json"

# Load data from file or URL
data_loaded = mdpp.load_data(data, request_datatype_suffix=".json", predicted_table='txs')


# Some examples of other inputs to the load_data function

# myarray_or_dataframe # Numpy array or pandas DataFrame
# r"/home/user/my.json" # Local file. The same works with .parquet, .h5, .json or .xlsx. On Windows a raw string ('r' in front of the string) is necessary because of backslash escape sequences
# "https://yoururl/your.csv" # Web URL (with suffix). The same works with JSON.
# "https://blockchain.info/unconfirmed-transactions?format=json" # In this case you also have to specify 'request_datatype_suffix': "json", 'data_orientation': "index", 'predicted_table': 'txs'
# {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']} # Dict with columns or rows (index) - data_orientation must be set!
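
# A live sketch of the dict input above. Note: 'columns' as the
# data_orientation value is an assumption here, mirroring the pandas orient names
dict_loaded = mdpp.load_data(
    {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}, data_orientation='columns')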


# You can pass multiple files in a list and the data will be concatenated (a live sketch follows below). It can be a list of paths or a list of Python objects. Examples:

# [{'col_1': 3, 'col_2': 'a'}, {'col_1': 0, 'col_2': 'd'}]  # List of records
# [np.random.randn(20, 3), np.random.randn(25, 3)]  # List of numpy arrays - DataFrames work the same way
# ["https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv", "https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv"]  # List of URLs
# ["path/to/my1.csv", "path/to/my1.csv"]


# Transform various data into a defined format - pandas DataFrame - convert to numeric if possible, keep
# only numeric data and resample if configured. It returns a DataFrame.
data_consolidated = mdpp.data_consolidation(
    data_loaded, predicted_column="weight", data_orientation="index", remove_nans_threshold=0.9, remove_nans_or_replace='interpolate')

# You can add some extra information to the data that can help (beware - it can slow down the machine learning model)
to_be_extended = np.array([[0, 2] * 64, [0, 0, 0, 5] * 32]).T
extended = mdpp.add_frequency_columns(to_be_extended, window=8)


to_be_extended2 = pd.DataFrame([range(30), range(30, 60)]).T
extended2 = mdpp.add_derived_columns(to_be_extended2, differences=True, second_differences=True, multiplications=True,
                                    rolling_means=True, rolling_stds=True, mean_distances=True, window=10)

# Feature extraction is under development  :[

# Preprocess data. It returns the preprocessed data, but also the last undifferenced value and the scaler for
# inverse transformation, so unpack the extras with _
data_preprocessed, _, _ = mdpp.preprocess_data(data_consolidated, remove_outliers=True, smoothit=False,
                                              correlation_threshold=False, data_transform=False, standardizeit='standardize')
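
The extra return values allow an inverse transformation later, e.g. on model predictions. A minimal sketch, assuming preprocess_data_inverse accepts the scaler and the last undifferenced value returned by preprocess_data - the keyword names below are an assumption, not confirmed API:

# Keep the scaler and the last undifferenced value instead of discarding them
preprocessed, last_undiff_value, final_scaler = mdpp.preprocess_data(
    data_consolidated, remove_outliers=True, standardizeit='standardize')

# Map values of the predicted column back to the original scale (keyword names are an assumption)
inverse = mdpp.preprocess_data_inverse(
    np.asarray(preprocessed)[:, 0], final_scaler=final_scaler,
    last_undiff_value=last_undiff_value, standardizeit='standardize')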

Inputs

The second module is inputs. It takes tabular time-series data and puts it into a format (input vector X, output vector y and input for the predicted value x_input) that can be fed into machine learning models, for example in sklearn or TensorFlow. It contains the functions make_sequences, create_inputs and create_tests_outputs.

Example for n_steps_in = 3 and n_steps_out = 1:

From [[1], [2], [3], [4], [5], [6]]

Inputs: [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
Outputs: [[4], [5], [6]]
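
A quick runnable check of that example (make_sequences returns four values, as in the multivariate example below):

import numpy as np
import mydatapreprocessing as mdp

univariate = np.array([[1], [2], [3], [4], [5], [6]])
X, y, x_input, _ = mdp.inputs.make_sequences(univariate, n_steps_in=3, n_steps_out=1)
# X -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]], y -> [[4], [5], [6]]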

Multivariate data can also be used.

import numpy as np

import mydatapreprocessing as mdp

data = np.array([[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16], [17, 18, 19, 20, 21, 22, 23, 24]]).T
X, y, x_input, _ = mdp.inputs.make_sequences(data, n_steps_in=3, n_steps_out=2)

# This example creates the results below from the following array:

# data = array([[1, 9, 17],
#               [2, 10, 18],
#               [3, 11, 19],
#               [4, 12, 20],
#               [5, 13, 21],
#               [6, 14, 22],
#               [7, 15, 23],
#               [8, 16, 24]])

# It produces these results (the data are serialized - the windows from each column are concatenated):

# X = array([[1, 2, 3, 9, 10, 11, 17, 18, 19],
#            [2, 3, 4, 10, 11, 12, 18, 19, 20],
#            [3, 4, 5, 11, 12, 13, 19, 20, 21],
#            [4, 5, 6, 12, 13, 14, 20, 21, 22]])

# y = array([[4, 5],
#            [5, 6],
#            [6, 7],
#            [7, 8]])

# x_input = array([[ 6,  7,  8, 14, 15, 16, 22, 23, 24]])
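
These outputs can be fed straight into a model. A hedged sketch with sklearn - the regressor choice is illustrative, any estimator that accepts 2-D inputs works:

from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X, y)  # X: one window per row, y: the values to predict
prediction = model.predict(x_input)  # forecast from the newest window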

Generatedata

The third module is generatedata. It generates some basic data such as sine waves, ramps and random data. In the future, it will also import some real datasets for model KPIs.

Example

import mydatapreprocessing as mdp

data = mdp.generatedata.gen_sin(1000)
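
Generated data can be piped into the other modules. A minimal sketch, assuming gen_sin returns a 1-D numpy array:

import mydatapreprocessing as mdp

data = mdp.generatedata.gen_sin(1000)
# Reshape to a single column and window it for a model (the 1-D return shape is an assumption)
X, y, x_input, _ = mdp.inputs.make_sequences(data.reshape(-1, 1), n_steps_in=10, n_steps_out=1)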



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mydatapreprocessing-1.1.18.tar.gz (24.7 kB)

Uploaded Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

mydatapreprocessing-1.1.18-py3.7.egg (54.6 kB)

Uploaded Egg

mydatapreprocessing-1.1.18-py3-none-any.whl (27.5 kB)

Uploaded Python 3

File details

Details for the file mydatapreprocessing-1.1.18.tar.gz.

File metadata

  • Download URL: mydatapreprocessing-1.1.18.tar.gz
  • Upload date:
  • Size: 24.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.1.18.tar.gz
Algorithm Hash digest
SHA256 4aa2c9bf299ae925ac6d8baeb5a5d0ff6e3790019e82ce0db7e10a6224523606
MD5 2f72dd592be94ffaadb8b6406240b47f
BLAKE2b-256 c2c3a6bdc2ec1452a9a27e39bda086ef75c7ec3953a4b55ec8e8b929262c8c71

See more details on using hashes here.

File details

Details for the file mydatapreprocessing-1.1.18-py3.7.egg.

File metadata

  • Download URL: mydatapreprocessing-1.1.18-py3.7.egg
  • Upload date:
  • Size: 54.6 kB
  • Tags: Egg
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.1.18-py3.7.egg
Algorithm Hash digest
SHA256 7700d3175e777b4e20e6d09934f72fe844dbddeae4c9fd1e560d2113e91614f1
MD5 092fb831e866b5a4827bb8d6b160f818
BLAKE2b-256 8bd45c0a516e8296d4ade7522c392803ea15bc001b1daca20f78373bd4334947

See more details on using hashes here.

File details

Details for the file mydatapreprocessing-1.1.18-py3-none-any.whl.

File metadata

  • Download URL: mydatapreprocessing-1.1.18-py3-none-any.whl
  • Upload date:
  • Size: 27.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.1.18-py3-none-any.whl
Algorithm Hash digest
SHA256 b5a86c7f089a5b7e50e72d3e382e33e0d6b7d3b37f84d9a765239cbf21a3db6e
MD5 775e3d43ce2cebffc4e6293b8f749986
BLAKE2b-256 3d2573879c734075731ff09ad4ccdd7cf467c27ec4f9a485b9fc888c54555d22

See more details on using hashes here.
