A Polars-based preprocessor for ML datasets
Clearbox AI Preprocessor
This repository contains the continuation of the work presented in our series of blog posts "The whys and hows of data preparation" (part 1, part 2, part 3).
The new version of the Preprocessor leverages the Polars library to achieve blazing-fast tabular data manipulation.
The Preprocessor accepts either a Pandas.DataFrame or a Polars.LazyFrame as input.
Preprocessing customization
Several options are available to customize the preprocessing.
The Preprocessor class features the following input arguments, besides the input dataset:
- discarding_threshold: float (default = 0.9)
  Float number between 0 and 1 to set the threshold for discarding categorical features. If more than discarding_threshold * 100% of values in a categorical feature are different from each other, then the column is discarded. For example, if discarding_threshold=0.9, a column will be discarded if more than 90% of its values are unique.
- get_discarded_info: bool (default = False)
  When set to True, the preprocessor exposes the methods preprocessor.get_discarded_features_reason, which reports which columns were discarded and why, and preprocessor.get_single_valued_columns, which provides the values of the single-valued discarded columns. Note that setting get_discarded_info=True will considerably slow down the processing operation! The list of discarded columns is available even if get_discarded_info=False, so set this flag to True only if you need to know why a column was discarded or, if it contained just one value, what that value was.
- excluded_col: (default = [])
  List containing the names of the columns to be excluded from processing. These columns are returned in the final dataframe without being manipulated.
- time: (default = None)
  Name of the time column by which to sort the dataframe in the case of time series (a usage sketch follows this list).
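For instance, a minimal initialization sketch is shown below. The frame and the column names ("timestamp", "category", "value") are made up for illustration; only the Preprocessor arguments themselves come from the list above.

import polars as pl
from clearbox_preprocessor import Preprocessor

# Hypothetical time-series frame; "timestamp", "category" and "value" are made-up names
lf = pl.LazyFrame({
    "timestamp": ["2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04"],
    "category": ["a", "b", "a", None],
    "value": [1.0, None, 3.0, 4.0],
}).with_columns(pl.col("timestamp").str.to_date("%Y-%m-%d"))

preprocessor = Preprocessor(
    lf,
    discarding_threshold=0.8,   # discard categorical columns with more than 80% unique values
    excluded_col=["category"],  # leave the "category" column untouched
    time="timestamp",           # sort the frame by this column (time-series case)
)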
The method collect of the class Preprocessor features the following input arguments, besides the input dataset:
- scaling: (default = "normalize")
  Specifies the scaling operation to perform on numerical features.
  - "normalize": applies normalization to numerical features
  - "standardize": applies standardization to numerical features
- num_fill_null: (default = "mean")
  Specifies the value to fill null values with, or the strategy for filling null values in numerical features.
  - value: fills null values with the specified value
  - "mean": fills null values with the average of the column
  - "forward": fills null values with the previous non-null value in the column
  - "backward": fills null values with the following non-null value in the column
  - "min": fills null values with the minimum value of the column
  - "max": fills null values with the maximum value of the column
  - "zero": fills null values with zeros
  - "one": fills null values with ones
- n_bins: (default = 0)
  Integer that determines the number of bins into which numerical features are discretized. When set to 0, no discretization takes place and the scaling method specified in the scaling argument is applied instead. Note that if n_bins is different from 0, discretization takes place instead of scaling, regardless of whether the scaling argument is specified (see the sketch after this list).
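As a minimal sketch of how these options combine (preprocessor and q stand for an already initialized Preprocessor and its input frame, as in the Usage section below):

# Standardize numerical features and fill their nulls with the column minimum
df_scaled = preprocessor.collect(q, scaling="standardize", num_fill_null="min")

# With n_bins > 0, numerical features are discretized into 5 bins;
# in that case the scaling argument is ignored
df_binned = preprocessor.collect(q, n_bins=5)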
Timeseries
The Preprocessor also features a timeseries manipulation and feature extraction method called extract_ts_features().
This method takes as input:
- the preprocessed dataframe
- the target vector in the form of a Pandas.Series or a Polars.Series
- the name of the time column
- the name of the id column to group by
It returns the most relevant features, selected from a wide range of extracted features.
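A hedged sketch of a call is shown below; the target series and the column names ("timestamp", "id") are made up for illustration, and the arguments are passed positionally in the order listed above, which may differ from the actual signature.

import polars as pl

# Hypothetical target vector (a Pandas.Series works as well)
y = pl.Series("target", [0, 1, 0, 1, 1, 0])

# df is the dataframe returned by preprocessor.collect(...);
# "timestamp" and "id" are hypothetical column names
ts_features = preprocessor.extract_ts_features(df, y, "timestamp", "id")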
Installation
You can install the preprocessor by running the following command:
pip install clearbox_preprocessor
Usage
You can start using the Preprocessor by importing it and creating a Pandas.DataFrame or a Polars.LazyFrame:
import polars as pl
from clearbox_preprocessor import Preprocessor

q = pl.LazyFrame(
    {
        "cha": ["x", None, "z", "w", "x", "k"],
        "int": [123, 124, 223, 431, 435, 432],
        "dat": ["2023-1-5T00:34:12.000Z", "2023-2-3T04:31:45.000Z", "2023-2-4T04:31:45.000Z", None, "2023-5-12T21:41:58.000Z", "2023-6-1T17:52:22.000Z"],
        "boo": [True, False, None, True, False, False],
        "equ": ["a", "a", "a", "a", None, "a"],
        "flo": [43.034, 343.1, 224.23, 75.3, None, 83.2],
        "str": ["asd", "fgh", "fgh", "", None, "cvb"]
    }
).with_columns(pl.col('dat').str.to_datetime("%Y-%m-%dT%H:%M:%S.000Z"))

q.collect()
At this point, you can initialize the Preprocessor by passing it the LazyFrame or DataFrame created above, and then call the collect() method to materialize the processed dataframe.
Note that if no argument is specified beyond the dataframe q, the default settings are employed for preprocessing:
preprocessor = Preprocessor(q)
df = preprocessor.collect(q)
df
Customization example
In the following example, when the Preprocessor is initialized:
- the discarding threshold is lowered from 90% to 80% (a column will be discarded if more than 80% of its values are unique)
- the discarded-feature information is stored in the preprocessor instance
- the column "boo" is excluded from the preprocessing and is preserved unchanged.
When the method collect() is called:
- the scaling method chosen for numerical features is standardization
- the fill-null strategy for numerical features is "forward".
preprocessor = Preprocessor(q, get_discarded_info=True, discarding_threshold = 0.8, excluded_col = ["boo"])
df = preprocessor.collect(q, scaling = "standardize", num_fill_null = "forward")
df
If the Preprocessor's argument get_discarded_info is set to True during initialization, it is possible to call the method get_discarded_features_reason() to display the discarded features.
For discarded single-valued columns, the value they contained is also displayed and is available in a dictionary called single_value_columns, stored in the Preprocessor instance, which can be used as metadata.
preprocessor.get_discarded_features_reason()
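Assuming the dictionary is exposed as an attribute of the Preprocessor instance, as the description above suggests, the single-valued columns and their values can then be inspected directly:

# Maps each discarded single-valued column to the value it contained (assumed attribute name)
preprocessor.single_value_columns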
To do
- Implement unit tests