
# DataScienceTools

This repository contains some modules I consider useful in Data Science tasks. Most of them are based on sklearn and require input data to be in pandas DataFrames or Series.

## Table of contents

1. [Preprocessing](#preprocessing)  
   1.1 [NumScaler](#numscaler)  
   1.2 [Imputer](#imputer)  
   1.3 [FeatureConverter](#featureconverter)  
   1.4 [OneHotEncoder](#onehotencoder)  
   1.5 [Bucketizer](#bucketizer)  
2. [Classifier](#classifier)  
   2.1 [BestGuessClassifier](#bestguessclassifier)  
   2.2 [BestGuessRegressor](#bestguessregressor)

<a name="preprocessing"></a>
## 1. Preprocessing

These modules are meant to help with preprocessing tasks. They are built upon `sklearn.base.TransformerMixin` and can be used in a `sklearn.pipeline.Pipeline`. Possible tasks are scaling, imputing, …
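Because these transformers follow the `TransformerMixin` interface, they can be chained like ordinary scikit-learn steps. A minimal sketch, assuming the import path `dstools.preprocessing` and default constructors:

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from dstools.preprocessing import Imputer, NumScaler, OneHotEncoder  # import path assumed

df = pd.DataFrame({
    "age":  [23.0, None, 31.0],
    "city": ["Berlin", "Paris", None],
})

# Chain the transformers like any other scikit-learn pipeline steps.
pipe = Pipeline([
    ("impute", Imputer()),        # fill missing values
    ("scale",  NumScaler()),      # scale only the numeric columns
    ("encode", OneHotEncoder()),  # binary-encode the categorical columns
])

df_prepared = pipe.fit_transform(df)
```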

<a name="numscaler"></a>
### 1.1 NumScaler

NumScaler is a wrapper for scalers. Some scalers only take numerical input and can't ignore non-numerical data. NumScaler identifies the numerical columns and passes only those to a scaler (default: `sklearn.preprocessing.StandardScaler`).
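A minimal usage sketch; the `scaler` keyword argument is an assumption about how the wrapped scaler is passed in, not the documented signature:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from dstools.preprocessing import NumScaler  # import path assumed

df = pd.DataFrame({"income": [30_000, 52_000, 41_000],
                   "city":   ["Berlin", "Paris", "Rome"]})

# Default is StandardScaler; the `scaler` keyword for swapping it out is assumed.
scaler = NumScaler(scaler=MinMaxScaler())
df_scaled = scaler.fit_transform(df)  # only 'income' is scaled, 'city' passes through
```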

<a name="imputer"></a>
### 1.2 Imputer

Imputes missing values; built on `sklearn.base.TransformerMixin`. Numeric values can be imputed by a given metric (default: `np.mean`) or by a constant value passed in `manual_values`. Non-numeric values can be imputed by the most frequent value or by a constant value passed in `manual_values`.
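A minimal sketch; `manual_values` is named above, while the `metric` keyword is an assumed way of passing the imputation metric:

```python
import numpy as np
import pandas as pd
from dstools.preprocessing import Imputer  # import path assumed

df = pd.DataFrame({"age":  [23.0, np.nan, 31.0],
                   "city": ["Berlin", None, "Berlin"]})

# Numeric columns default to np.mean, non-numeric columns to the most frequent value;
# `manual_values` overrides this with a constant per column. The `metric` keyword is assumed.
imputer = Imputer(metric=np.median, manual_values={"city": "unknown"})
df_imputed = imputer.fit_transform(df)
```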

<a name="featureconverter"></a>
### 1.3 FeatureConverter

The FeatureConverter helps to integrate common preprocessing steps into sklearn pipelines. Supported preprocessing steps are replacing values, converting to str, int or float, dropping columns, and adding flags. Steps are performed in the following order: create flags, replace values, convert types, drop columns.
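A minimal sketch of the four steps in their documented order; the keyword arguments used here (`flags`, `replace`, `convert`, `drop`) are hypothetical names, not the documented API:

```python
import pandas as pd
from dstools.preprocessing import FeatureConverter  # import path assumed

df = pd.DataFrame({"zip":    [10115, 75001, 184],
                   "status": ["y", "n", "?"],
                   "tmp_id": [1, 2, 3]})

# Hypothetical keyword arguments, one per supported step, applied in the
# documented order: create flags, replace values, convert types, drop columns.
converter = FeatureConverter(
    flags=["status"],                 # add a flag column for 'status' (hypothetical)
    replace={"status": {"?": None}},  # replace values
    convert={"zip": str},             # convert to str, int or float
    drop=["tmp_id"],                  # drop columns
)
df_converted = converter.fit_transform(df)
```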

<a name="onehotencoder"></a>
### 1.4 OneHotEncoder

This module performs binary encoding on columns containing categorical data. It assumes that all non-numeric columns contain categorical data; if categorical data is encoded in numeric columns, use dstools.preprocessing.FeatureConverter to convert these values first. The maximum number of encoded values can be set globally or fine-tuned per column. Values that exceed the maximum number of encoded values are aggregated in a REST class. Missing values can either be put in the REST class or be treated as a distinct value. Categorical columns with exactly two values (including missing values) can be encoded into one column to reduce dimensionality.
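A minimal sketch; all keyword arguments are assumed names that illustrate the options described above (global cap, per-column cap, missing-value handling):

```python
import pandas as pd
from dstools.preprocessing import OneHotEncoder  # import path assumed

df = pd.DataFrame({"color":  ["red", "blue", "green", "blue", None],
                   "active": ["yes", "no", "yes", "yes", "no"]})

# All keyword arguments are assumed names for the options described above.
encoder = OneHotEncoder(
    max_values=2,                        # global cap; rarer values go into the REST class
    max_values_per_column={"color": 3},  # per-column fine-tuning
    missing_as_rest=False,               # keep missing values as a distinct value
)
df_encoded = encoder.fit_transform(df)   # 'active' has exactly two values -> one column
```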

<a name="bucketizer"></a>
### 1.5 Bucketizer

The Bucketizer puts numeric features into bins. The binned feature can either replace the original feature or be added as a new one. You can bin all numeric features, pass a list of the features to be binned, or pass a prefix of the features to be binned.
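A minimal sketch; `columns`, `bins` and `replace` are assumed parameter names mirroring the options described above:

```python
import pandas as pd
from dstools.preprocessing import Bucketizer  # import path assumed

df = pd.DataFrame({"age":    [17, 34, 52, 71],
                   "income": [28_000, 51_000, 63_000, 40_000]})

# `columns`, `bins` and `replace` are assumed parameter names mirroring the options above.
bucketizer = Bucketizer(columns=["age"],  # or all numeric features, or a prefix
                        bins=3,
                        replace=False)    # keep the original and add the binned feature
df_binned = bucketizer.fit_transform(df)
```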

<a name="classifier"></a>
## 2. Classifier

These modules provide simple baseline estimators: they produce a constant best guess that trained models can be compared against.

<a name="bestguessclassifier"></a>
### 2.1 BestGuessClassifier

The BestGuessClassifier creates a constant best guess for a given metric. Use this classifier for binned dependent variables; for interval-scaled numeric dependent variables use dstools.regressors.BestGuessRegressor.
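A minimal sketch of using it as a naive baseline; the import path and the default constructor are assumptions:

```python
import pandas as pd
from dstools.classifier import BestGuessClassifier  # import path assumed

X = pd.DataFrame({"f1": [1, 2, 3, 4]})
y = pd.Series(["low", "low", "high", "low"])  # binned dependent variable

# Fits a single constant prediction, useful as a naive baseline for real models.
baseline = BestGuessClassifier()
baseline.fit(X, y)
baseline.predict(X)  # the same best guess for every row
```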

<a name="bestguessregressor"></a>
### 2.2 BestGuessRegressor

The BestGuessRegressor creates a constant best guess for a given metric, e.g. mean absolute or mean squared error. This regressor is for interval-scaled numeric dependent variables.
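A minimal sketch; the import path follows the reference above, and how the metric is passed is omitted because the constructor signature is not documented here:

```python
import pandas as pd
from dstools.regressors import BestGuessRegressor  # path as referenced above

X = pd.DataFrame({"f1": [1, 2, 3, 4]})
y = pd.Series([10.0, 12.5, 11.0, 40.0])  # interval-scaled numeric target

# Depending on the chosen metric the constant differs: minimising mean squared error
# yields the mean, minimising mean absolute error yields the median.
baseline = BestGuessRegressor()
baseline.fit(X, y)
baseline.predict(X)  # constant best guess for every row
```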

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distribution

DataScienceTools-0.1.1.dev0-py3-none-any.whl (14.9 kB)

Uploaded Python 3

File details

Details for the file DataScienceTools-0.1.1.dev0-py3-none-any.whl.

File metadata

  • Download URL: DataScienceTools-0.1.1.dev0-py3-none-any.whl
  • Upload date:
  • Size: 14.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/2.0.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/41.6.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.7.3

File hashes

Hashes for DataScienceTools-0.1.1.dev0-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | b3145c6476ef5154c1c1ea1bb0ba123b7de3bae2a6aecabd1c3c66caeca67a4b |
| MD5 | 163291c676cebf00db25a2b4ae12bc5e |
| BLAKE2b-256 | 4321a724f8402e960d2980ad7bfdf087693e8de2298c44a363cfb3c16f5f7bbc |

See more details on using hashes here.
