
Cane - Categorical Attribute traNsformation Environment


CANE is a simple yet powerful preprocessing package for machine learning.

At the moment, it offers the following preprocessing methods:

--> The Percentage Categorical Pruned (PCP) method merges all of the least frequent levels (summing up to "perc" percent of the data) into a single level, e.g., an "Others" category, as presented in (https://doi.org/10.1109/IJCNN.2019.8851888). It can be useful when dealing with attributes that have a large number of categorical levels (e.g., city data).

An example of this can be viewed in the following PDF:

View PDF.

It shows the 1,000 highest-frequency values (in decreasing order) of the user city attribute from the TEST traffic data, which contains a total of 10,690 levels. For this attribute, PCP selects only the 688 most frequent levels (dashed vertical line), merging the other 10,002 infrequent levels into the "Others" label.

This method results in 689 binary inputs, which is much less than the 10,690 binary inputs required by the standard one-hot transform (a reduction of around 94 percentage points).
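The pruning rule behind PCP can be sketched in plain pandas. This is a minimal illustration of the idea, not cane's implementation; it assumes perc = 0.05 (i.e., the least frequent levels that jointly account for under 5% of the rows get merged):

```python
import pandas as pd

def pcp_sketch(s: pd.Series, perc: float = 0.05, label: str = "Others") -> pd.Series:
    # Relative frequency of each level, least frequent first.
    freq = s.value_counts(normalize=True, ascending=True)
    # Levels whose cumulative frequency stays below `perc` are pruned.
    pruned = freq.index[freq.cumsum() < perc]
    # Keep frequent levels; merge pruned ones into the single `label` level.
    return s.where(~s.isin(pruned), label)

s = pd.Series(["a"] * 70 + ["b"] * 20 + ["c"] * 6 + ["d"] * 3 + ["e"] * 1)
out = pcp_sketch(s, perc=0.05)
print(sorted(out.unique()))  # "d" and "e" (4% of rows together) become "Others"
```

After pruning, a one-hot transform of `out` needs one binary input per surviving level plus one for "Others", instead of one per original level.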

--> The Inverse Document Frequency (IDF) method codifies the categorical levels into frequency-based values, where values closer to 0 indicate more frequent levels (https://ieeexplore.ieee.org/document/8710472).
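The codification can be sketched as follows. This minimal example uses the common log(N / level_count) formulation, so the most frequent level maps closest to 0; cane's exact formula may differ in detail:

```python
import math
import pandas as pd

def idf_sketch(s: pd.Series) -> pd.Series:
    # Replace each level by log(N / count(level)):
    # frequent levels -> small values near 0, rare levels -> large values.
    counts = s.value_counts()
    n = len(s)
    return s.map(lambda v: math.log(n / counts[v]))

s = pd.Series(["a"] * 9 + ["b"] * 1)
out = idf_sketch(s)
print(out.iloc[0], out.iloc[-1])  # "a" (frequent) is much closer to 0 than "b" (rare)
```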

--> A simple One-Hot Encoding implementation.
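For reference, the standard one-hot transform that this method implements can be sketched with plain pandas (`get_dummies` here, not cane's API; see the cane example further below for the actual `cane.one_hot` calls):

```python
import pandas as pd

demo = pd.DataFrame({"city": ["lisbon", "porto", "lisbon"]})
# One binary 0/1 column per level; dtype=int gives integer values.
onehot = pd.get_dummies(demo, columns=["city"], dtype=int)
print(list(onehot.columns))  # ['city_lisbon', 'city_porto']
```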

--> MinMax and Standard scalers (based on sklearn functions) with column selection and multicore support. These transformations can also be applied to specific columns only, instead of the full dataset (follow the example). However, they only work with numerical data (e.g., MSE values, decision scores).

--> You can also provide a custom scaler of your own! (check the example)

--> Use IDF with spark dataframes

Future Function ideas:

MultiColumn scale (based on the implementation of IDF and PCP)

Scaling of IDF values (normalized IDF)

Installation

To install this package, run the following command:

pip install cane

New

Version 2.4:

[x] - Fixed the One-Hot Encoding method (it now returns integer values)
[x] - Fixed Examples and Documentation

Suggestions and feedback

Any feedback is appreciated. For questions and other suggestions, contact luis.matos@dsi.uminho.pt. Found any bugs? Post them on the GitHub page of the project! (https://github.com/Metalkiler/Cane-Categorical-Attribute-traNsformation-Environment)

Thanks for the support!

Citation

To cite this module please use:

@article{MATOS2022100359,
	author = {Lu{\'\i}s Miguel Matos and Jo{\~a}o Azevedo and Arthur Matta and Andr{\'e} Pilastri and Paulo Cortez and Rui Mendes},
	doi = {https://doi.org/10.1016/j.simpa.2022.100359},
	issn = {2665-9638},
	journal = {Software Impacts},
	keywords = {Data preprocessing, CANE, Python programming language, Machine learning},
	pages = {100359},
	title = {Categorical Attribute traNsformation Environment (CANE): A python module for categorical to numeric data preprocessing},
	url = {https://www.sciencedirect.com/science/article/pii/S2665963822000720},
	year = {2022},
	bdsk-url-1 = {https://www.sciencedirect.com/science/article/pii/S2665963822000720},
	bdsk-url-2 = {https://doi.org/10.1016/j.simpa.2022.100359}}

Example

import pandas as pd
import cane
import timeit
import numpy as np
x = [k for s in ([k] * n for k, n in [('a', 70000), ('b', 50000), ('c', 30000), ('d', 10000), ('e', 1000)]) for k in s]
df = pd.DataFrame({f'x{i}' : x for i in range(1, 130)})

dataPCP = cane.pcp(df)  # uses the PCP method with 1 core and perc=0.05 for all columns
dataPCP = cane.pcp(df, n_coresJob=2)  # uses the PCP method with 2 cores for all columns
dataPCP = cane.pcp(df, n_coresJob=2, disableLoadBar=False)  # with progress bar for all columns
dataPCP = cane.pcp(df, n_coresJob=2, disableLoadBar=False, columns_use=["x1", "x2"])  # with progress bar and specific columns



# dictionary with the transformed data
dataPCP = cane.pcp(df)
dictionary = cane.PCPDictionary(dataset=dataPCP, columnsUse=dataPCP.columns,
                                targetColumn=None)  # no target feature, so it is not included in the dictionary
print(dictionary)

dataIDF = cane.idf(df)  # uses the IDF method and only 1 core for all columns
dataIDF = cane.idf(df, n_coresJob=2)  # uses the IDF method and 2 cores for all columns
dataIDF = cane.idf(df, n_coresJob=2, disableLoadBar=False)  # with progress bar for all columns
dataIDF = cane.idf(df, n_coresJob=2, disableLoadBar=False, columns_use=["x1", "x2"])  # specific columns
dataIDF = cane.idf_multicolumn(df, columns_use=["x1", "x2"])  # application of IDF in the multicolumn setting

idfDictionary = cane.idfDictionary(Original=df, Transformed=dataIDF, columns_use=["x1", "x2"])  # following the example above with the 2 columns
                                
                                
dataH = cane.one_hot(df)  # without a column prefixer
dataH2 = cane.one_hot(df, column_prefix='column')  # it will use the original column name prefix
# (useful for when dealing with id number columns)
dataH3 = cane.one_hot(df, column_prefix='customColName')  # it will use a custom prefix defined by
# the value of the column_prefix
dataH4 = cane.one_hot(df, column_prefix='column', n_coresJob=2)  # it will use the original column name prefix
# (useful for when dealing with id number columns)
# with 2 cores

dataH4 = cane.one_hot(df, column_prefix='column', n_coresJob=2
                      ,disableLoadBar = False)  # With Progress Bar Active with 2 cores

dataH4 = cane.one_hot(df, column_prefix='column', n_coresJob=2
                      ,disableLoadBar = False,columns_use = ["x1","x2"])  # With Progress Bar specific columns!



#specific example with multicolumn
x2 = [k for s in ([k] * n for k, n in [('a', 50),
                                       ('b', 10),
                                       ('c', 20),
                                       ('d', 15), 
                                       ('e', 5)]) for k in s]

x3 = [k for s in ([k] * n for k, n in [('a', 40),
                                       ('b', 20),
                                       ('c', 1),
                                       ('d', 1), 
                                       ('e', 38)]) for k in s]
df2 = pd.concat([pd.DataFrame({f'x{i}' : x2 for i in range(1, 3)}),pd.DataFrame({f'y{i}' : x3 for i in range(1, 3)})], axis=1)
dataPCP = cane.pcp(df2, n_coresJob=2,disableLoadBar = False)
print("normal PCP \n",dataPCP)
dataPCP2 = cane.pcp_multicolumn(df2, columns_use=["x1", "y1"])  # application of PCP in the multicolumn setting
print("multicolumn PCP \n",dataPCP2)

dataIDF = cane.idf(df2, n_coresJob=2,disableLoadBar = False, columns_use = ["x1","y1"]) # specific columns
print("normal idf \n",dataIDF)
dataIDF2 = cane.idf_multicolumn(df2, columns_use=["x1", "y1"])  # application of IDF in the multicolumn setting
print("multicolumn idf \n",dataIDF2)



#Time Measurement in 10 runs
print("Time Measurement in 10 runs (unicore)")
OT = timeit.timeit(lambda:cane.one_hot(df, column_prefix='column', n_coresJob=1),number = 10)
IT = timeit.timeit(lambda:cane.idf(df),number = 10)
PT = timeit.timeit(lambda:cane.pcp(df),number = 10)
print("One-Hot Time:",OT)
print("IDF Time:",IT)
print("PCP Time:",PT)

#Time Measurement in 10 runs (multicore)
print("Time Measurement in 10 runs (multicore)")
OTM = timeit.timeit(lambda:cane.one_hot(df, column_prefix='column', n_coresJob=10),number = 10)
ITM = timeit.timeit(lambda:cane.idf(df,n_coresJob=10),number = 10)
PTM = timeit.timeit(lambda:cane.pcp(df,n_coresJob=10),number = 10)
print("One-Hot Time Multicore:",OTM)
print("IDF Time Multicore:",ITM)
print("PCP Time Multicore:",PTM)





# IDF with pyspark configs
import cane
from pyspark.sql import SparkSession
#Create PySpark SparkSession
spark = SparkSession.builder.getOrCreate()
#Create PySpark DataFrame from Pandas
sparkDF=spark.createDataFrame(df)
cols = sparkDF.columns
DFIDF, idf = cane.spark_idf_multicolumn(sparkDF, cols)
print(DFIDF.show(20))
dataIDF = cane.idf(df)
#check if it is correct:
print(dataIDF.equals(DFIDF.toPandas()))  # True means the Spark result matches the original pandas version


#PCP with pyspark configs
import cane
from pyspark.sql import SparkSession
#Create PySpark SparkSession
spark = SparkSession.builder.getOrCreate()
#Create PySpark DataFrame from Pandas
sparkDF=spark.createDataFrame(df)
cols = sparkDF.columns
DFPCP, pcp = cane.spark_pcp(sparkDF, cols, 0.05, "Others")
DFPCP.show(20)
#check if it is correct:
dataPCP = cane.pcp(df)
print(dataPCP.equals(DFPCP.toPandas()))  # True means the Spark result matches the original pandas version

Scaler Example with cane

These examples present the usage of cane with the standard methods (standard scaler and min-max scaler). They also show how to implement a custom scaler function of your own with cane!

#New Scaler Function 

dfNumbers = pd.DataFrame(np.random.randint(0,100000,size=(100000, 12)), columns=list('ABCDEFGHIJKL'))
cane.scale_data(dfNumbers, n_cores = 3, scaleFunc="min_max") # all columns using 3 cores
cane.scale_data(dfNumbers, column=["A","B"], n_cores = 3, scaleFunc="min_max") # scale specific columns
cane.scale_data(dfNumbers, column=["A","B"], n_cores = 3, scaleFunc="std") #standard Scaler



#####################Custom Function Example#######################

# This is an example file for your custom function (e.g., "functions.py")
import pandas as pd
import numpy as np
import cane 

def customFunc(val):
    return pd.DataFrame([round((i - 1) / 3, 2) for i in val],
                        columns=[val.name + "_custom_scaled_function"])



### This will be your main script

from functions import *
# with a custom function to apply to data:
if __name__ == "__main__":
    dfNumbers = pd.DataFrame(np.random.randint(0,100000,size=(100000, 12)), columns=list('ABCDEFGHIJKL'))
    cane.scale_data(dfNumbers, column=["A","B"], n_cores = 3, scaleFunc="custom", customfunc = customFunc)
    
