
Library/framework for making predictions.

Project description

mydatapreprocessing


Load data from a web link or a local file (json, csv, excel, parquet, h5, ...), consolidate it, and preprocess it (resampling, standardization, string embedding, derivation of new columns, feature extraction, etc.) based on configuration.

The library contains three modules.

The first, preprocessing, loads data and preprocesses it. It contains functions such as load_data, data_consolidation, preprocess_data, preprocess_data_inverse, add_frequency_columns, rolling_windows, add_derived_columns, etc.

Example

data = "https://blockchain.info/unconfirmed-transactions?format=json"

# Load data from file or URL
data_loaded = mdp.load_data(data, request_datatype_suffix=".json", predicted_table='txs')

# Transform various data into a defined format - a pandas DataFrame - convert to numeric if possible, keep
# only numeric data and resample if configured. It returns the consolidated DataFrame.
data_consolidated = mdp.data_consolidation(
    data_loaded, predicted_column="weight", data_orientation="index", remove_nans_threshold=0.9, remove_nans_or_replace='interpolate')

# Preprocess data. It returns the preprocessed data, but also the last undifferenced value and the scaler
# for inverse transformation, so unpack the extra return values with _
data_preprocessed, _, _ = mdp.preprocess_data(data_consolidated, remove_outliers=True, smoothit=False,
                                              correlation_threshold=False, data_transform=False, standardizeit='standardize')
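
If you keep the second and third return values instead of discarding them, they can later be used to undo the scaling and differencing with preprocess_data_inverse. This is only a hedged sketch - the keyword names in the commented call are assumptions, not the documented signature:

data_preprocessed, last_undiff_value, final_scaler = mdp.preprocess_data(
    data_consolidated, remove_outliers=True, standardizeit='standardize')

# Illustrative only (parameter names are assumptions, `predictions` is a hypothetical
# array of model outputs on the preprocessed scale):
# predictions_unscaled = mdp.preprocess_data_inverse(
#     predictions, final_scaler=final_scaler, last_undiff_value=last_undiff_value)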

Examples of the data formats load_data accepts:

# myarray_or_dataframe  # Numpy array or pandas DataFrame
# r"/home/user/my.json"  # Local file. The same with .parquet, .h5, .json or .xlsx. On Windows it's necessary to use a raw string - 'r' in front of the string - because of escape symbols \
# "https://yoururl/your.csv"  # Web URL (with suffix). The same with json.
# "https://blockchain.info/unconfirmed-transactions?format=json"  # In this case you also have to specify 'request_datatype_suffix': "json", 'data_orientation': "index", 'predicted_table': 'txs'
# [{'col_1': 3, 'col_2': 'a'}, {'col_1': 0, 'col_2': 'd'}]  # List of records
# {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}  # Dict with columns or rows (index) - necessary to set data_orientation! See the sketch below.
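
A dict, for instance, can be loaded directly; this is a minimal sketch assuming the same load_data and data_consolidation calls as in the example above (the values, the column name and data_orientation='columns' are illustrative assumptions):

records = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
records_loaded = mdp.load_data(records)
records_consolidated = mdp.data_consolidation(
    records_loaded, predicted_column='col_1', data_orientation='columns')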

The second module is inputs. It takes tabular time-series data and puts it into a format that can be fed into machine learning models, for example in sklearn or TensorFlow. It contains the functions make_sequences, create_inputs and create_tests_outputs.
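
A conceptual sketch of the sliding-window transformation that such functions perform, written in plain numpy rather than with the library's own API (the exact signatures of make_sequences and create_inputs may differ):

import numpy as np

def sliding_window(series, n_steps_in, n_steps_out=1):
    """Turn a 1D series into (X, y) pairs usable by sklearn or TensorFlow models."""
    X, y = [], []
    for i in range(len(series) - n_steps_in - n_steps_out + 1):
        X.append(series[i:i + n_steps_in])                             # input window
        y.append(series[i + n_steps_in:i + n_steps_in + n_steps_out])  # target value(s)
    return np.array(X), np.array(y)

X, y = sliding_window(np.arange(10), n_steps_in=3)
# X[0] is [0, 1, 2] and y[0] is [3] - each window predicts the value(s) that follow it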

The third module is generatedata. It generates basic data such as sine, ramp and random signals. In the future, it will also import some real datasets for model KPIs.
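
The kind of signals it produces can be approximated with plain numpy; this is an illustration of the concept, not the generatedata API:

import numpy as np

n = 200
sin_data = np.sin(np.linspace(0, 4 * np.pi, n))  # sine wave
ramp_data = np.arange(n, dtype=float)            # ramp (linearly increasing values)
random_data = np.random.randn(n)                 # random (Gaussian noise)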

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mydatapreprocessing-1.0.10.tar.gz (18.8 kB)

Uploaded Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

mydatapreprocessing-1.0.10-py3.7.egg (42.6 kB)

Uploaded Egg

mydatapreprocessing-1.0.10-py3-none-any.whl (21.6 kB)

Uploaded Python 3

File details

Details for the file mydatapreprocessing-1.0.10.tar.gz.

File metadata

  • Download URL: mydatapreprocessing-1.0.10.tar.gz
  • Upload date:
  • Size: 18.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.0.10.tar.gz
Algorithm Hash digest
SHA256 417abb80617f095be46c1380f8b8ae57bb6877c95a283638a452df5efad2df40
MD5 095db536d1176a4562c6b93c897a9e24
BLAKE2b-256 01d107aea6ceacff1c9d27f24e980e5d88375201c65f01185c4bb927a0264298

See more details on using hashes here.

File details

Details for the file mydatapreprocessing-1.0.10-py3.7.egg.

File metadata

  • Download URL: mydatapreprocessing-1.0.10-py3.7.egg
  • Upload date:
  • Size: 42.6 kB
  • Tags: Egg
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.0.10-py3.7.egg
Algorithm Hash digest
SHA256 04bb7ef00eefde28fbde660e648fd9a3c58f53d1dfff4ae92c80f0f58eb57b41
MD5 ad965892e7db8c4d387d5326b626efe0
BLAKE2b-256 38b80fac15958247635c18860ccabf2f5c813e54848cc939864aca36a756d4bd

See more details on using hashes here.

File details

Details for the file mydatapreprocessing-1.0.10-py3-none-any.whl.

File metadata

  • Download URL: mydatapreprocessing-1.0.10-py3-none-any.whl
  • Upload date:
  • Size: 21.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.7.1

File hashes

Hashes for mydatapreprocessing-1.0.10-py3-none-any.whl
Algorithm Hash digest
SHA256 a397e4c94678bac2bf0628aec3aeef6c38ea4133285afd6c5b421b2797da58a1
MD5 95319bfc9d902ce73076a4dfc561a6fd
BLAKE2b-256 94be6a121b7cf752a885add7c75b47170519d91efcee0e9090353a1d6a0dc530

See more details on using hashes here.
