
Exploratory data analysis tools

Project description

Installation

To install the package, run:

pip install edamame

The edamame package works correctly inside a .ipynb file. Import it as follows:

import edamame as eda

Why Edamame?

Edamame was born under the inspiration of the pandas-profiling and pycaret packages. The scope of edamame is to provide friendly and helpful functions for handling the EDA (exploratory data analysis) step of a dataset study and, after that, for training and analyzing a battery of models for regression or classification problems.

Exploratory data analysis functions

You can find an example of an EDA performed with the edamame package in the eda_example.ipynb notebook.

Dimensions

A prettier version of the .shape attribute.

eda.dimensions(data)

Parameters:

  • data: A pandas dataframe.

The function displays the number of rows and columns of the dataframe passed.
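
A minimal usage sketch; the toy dataframe below and its column names are hypothetical, introduced only for these docs, and the snippets in the following sections reuse it:

import numpy as np
import pandas as pd
import edamame as eda

# hypothetical toy dataframe reused by the snippets below
df = pd.DataFrame({
    "age": [25.0, 32.0, np.nan, 51.0],
    "income": [40000, 52000, 61000, 0],
    "city": ["Rome", "Milan", None, "Rome"],
    "cylinders": [4, 6, 4, 8],  # a numeric code that is really a category
})

eda.dimensions(df)  # should display 4 rows and 4 columns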

Describe distribution

eda.describe_distribution(data)

Parameters:

  • data: A pandas dataframe.

Given a dataframe, the function displays the result of the .describe() method, divided into quantitative/numerical and categorical/object columns.
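
Continuing with the hypothetical df defined above:

eda.describe_distribution(df)  # one summary for age/income/cylinders, one for city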

Identify columns types

eda.identify_types(data)

Parameters:

  • data: A pandas dataframe.

Given a dataframe, the function displays the result of the .dtypes attribute and returns a list with the names of the quantitative/numerical columns and a list with the names of the columns identified as "object" by pandas.
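
A sketch of capturing the output on the hypothetical df; unpacking the result into two lists is an assumption based on the description above:

quant_cols, qual_cols = eda.identify_types(df)
# expected: quant_cols lists "age", "income", "cylinders"; qual_cols lists "city"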

Convert numerical columns to categorical

eda.num_to_categorical(data, col: list[str])

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns to convert.

Given a dataframe and a list of column names, the function converts the types of those columns to "object".
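
For instance, the numeric cylinders code in the hypothetical df is really categorical. The sketch assumes the function returns the converted dataframe:

df = eda.num_to_categorical(df, col=["cylinders"])
# df["cylinders"].dtype should now be object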

Missing data

eda.missing(data)

Parameters:

  • data: A pandas dataframe.

The function displays the following elements:

  • A table with the percentage of NA records for every column.
  • A table with the percentage of zeros among the records of every column.
  • A table with the percentage of duplicated rows.
  • A list of lists containing the names of the numerical columns with NA, the names of the categorical columns with NA, and the names of the columns with zeros among their records (see the sketch below).
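
A sketch of capturing the returned lists on the hypothetical df; unpacking the result into three names is an assumption based on the description above:

num_na, qual_na, zero_cols = eda.missing(df)
# expected: num_na -> ["age"], qual_na -> ["city"], zero_cols -> ["income"]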

Handling Missing values

eda.handling_missing(data, col: list[str], missing_val = np.nan, method: list[str] = [])

Parameters:

  • data: A pandas dataframe.
  • col: A list of the names of the dataframe columns to handle.
  • missing_val: The value that represents the NA in the columns passed. By default, it is equal to np.nan.
  • method: A list of the names of the methods (mean, median, most_frequent, drop) applied to the columns passed. By default, if nothing is indicated, the function applies the most_frequent method to all the columns passed. Indicating fewer methods than columns leads to autocompletion with the most_frequent method (see the example below).
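
For example, imputing age with the median and dropping the rows where city is missing; the sketch assumes the function returns the treated dataframe:

df = eda.handling_missing(df, col=["age", "city"], method=["median", "drop"])
# method=[] would instead apply most_frequent to both columns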

Drop columns

eda.drop_columns(data, col: list[str])

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns to drop.
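
For instance, assuming the function returns the reduced dataframe:

reduced = eda.drop_columns(df, col=["cylinders"])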

Plot categorical variables

eda.plot_categorical(data, col: list[str])

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns to plot.

The function returns a sequence of tables and plots. For every variable, plot_categorical produces an info table that reports:

  • The number of non-NaN rows.
  • The number of unique values.
  • The most frequent value.
  • The frequency of the most frequent value.

Beside the info table, you can see the top-cardinalities table, which shows the first ten values in order of frequency. In addition, the function returns a barplot of the cardinality frequencies. plot_categorical displays the message "too many unique values" instead of the plot if the variable has more than 1000 unique values, and removes the x-axis ticks if the variable has more than 50 unique values.

It is not mandatory to pass pandas "object" columns to plot_categorical, but it is strongly recommended.
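
For example, on the hypothetical df:

eda.plot_categorical(df, col=["city"])  # info table, top cardinalities, barplot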

Plot numerical variables

eda.plot_numerical(data, col: list[str], bins: int = 50)

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns to plot.
  • bins: Number of bins to use in the histogram plot.

Like plot_categorical, the function returns a sequence of tables and plots. For every variable, plot_numerical produces an info table that reports:

  • Count of non-NaN rows
  • Mean
  • Std
  • Min
  • 25%
  • 50%
  • 75%
  • Max
  • Number of unique values
  • Skewness

In addition, the function returns a histogram with an estimated density curve, plus a boxplot. Unlike plot_categorical, it is mandatory to pass numerical variables to plot_numerical so that the histogram can be drawn and the density estimated.
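
For example:

eda.plot_numerical(df, col=["age", "income"], bins=30)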

View cardinalities of variables

eda.view_cardinality(data, col: list[str])

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns for which we want to show the number of unique values.

The function especially helps in studying the cardinalities of categorical variables. If a variable presents a very high cardinality, we need to reduce the number of distinct values or drop the variable.

In addition, seeing low cardinality in a numerical variable can be a clue that it should be converted into a categorical one with the num_to_categorical function.
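
For example:

eda.view_cardinality(df, col=["city", "cylinders"])
# a low unique-value count for cylinders hints at using num_to_categorical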

Modify the cardinalities of a variable

eda.modify_cardinality(data, col: list[str], threshold: list[int])

Parameters:

  • data: A pandas dataframe.
  • col: A list of strings containing the names of columns for which we want to modify the cardinalities.
  • threshold: A list of integer values containing the threshold values for every variable.

All the values whose total count is lower than the corresponding threshold are grouped into a new unique value called Other.
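
For example, grouping the rarest cities of the hypothetical df; the sketch assumes the function returns the modified dataframe:

df = eda.modify_cardinality(df, col=["city"], threshold=[2])
# Milan, with a count below 2, should be grouped into Other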

Distribution study of a numerical variable

eda.num_variable_study(data, col:str, bins: int = 50, epsilon: float = 0.0001, theory: bool = False)

Parameters:

  • data: A pandas dataframe.
  • col: The name of the dataframe column to study.
  • bins: The number of bins used by the histograms. By default, bins=50.
  • epsilon: A constant used to handle non-strictly-positive variables. By default, epsilon=0.0001.
  • theory: A boolean value for displaying insight into the transformations applied. By default, it is set to False.

The function displays the following transformations of the variable col passed:

  • $log(x)$
  • $\sqrt{x}$
  • $x^2$
  • Box-cox
  • $1/x$

If a variable with zeros or negative values is passed, the function shows results based on the original data transformed to be strictly positive.

  • In case of zeros, the data are transformed as: $x_i = \begin{cases} \epsilon, & \text{if } x_i = 0 \\ x_i, & \text{otherwise} \end{cases}$.

  • In case of negative values, the data are transformed as: $x_i = x_i + |\min(x)| \cdot \epsilon$.
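
A usage sketch on the income column of the hypothetical df, which contains a zero, so the epsilon replacement above applies:

eda.num_variable_study(df, col="income", bins=30, theory=True)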

Pearson's correlation matrix

eda.correlation_pearson(data, threshold: float = 0.)

Parameters:

  • data: A pandas dataframe.
  • threshold: Only the correlation values higher than the threshold are shown in the matrix. A float, set by default to 0.
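
For example:

eda.correlation_pearson(df, threshold=0.5)  # only correlations above 0.5 are shown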

Correlation matrix for categorical columns

eda.correlation_categorical(data)

Parameters:

  • data: A pandas dataframe.

The function performs the Chi-Square Test of Independence between categorical variables of the dataset.
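
For example:

eda.correlation_categorical(df)  # pairwise chi-square tests on the object columns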

$\phi_k$ Correlation matrix

eda.correlation_phik(data, theory: bool = False)

Parameters:

  • data: A pandas dataframe.
  • theory: A boolean value for displaying insight into the theory of the $\phi_k$ correlation index. By default, it is set to False.
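
For example:

eda.correlation_phik(df, theory=True)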

Link to the paper

TODO

  • Finish the documentation.
  • Fix the methods in the regression class.
  • Add the xgboost model, PCA regression, and other methods for studying the goodness of fit of the models.
  • Add the classification part to the package.

