
A Python package for univariate, bivariate, and multivariate data analysis using PySpark


pyspark_eda

pyspark_eda is a Python library for performing exploratory data analysis (EDA) using PySpark. It offers functionality for univariate, bivariate, and multivariate analysis, handles missing values and outliers, and visualizes data distributions.

Features

  • Univariate analysis: Analyzes numerical and categorical columns individually. Displays a histogram and frequency distribution table if requested.
  • Bivariate analysis: Includes Pearson and Spearman correlation, Cramer's V, and ANOVA. Displays a scatter plot if requested.
  • Multivariate analysis: Includes the Variance Inflation Factor (VIF).
  • Automatic handling: Deals with missing values and outliers seamlessly.
  • Visualization: Provides graphical representations of data distributions and relationships.

Installation

You can install pyspark_eda via pip:

pip install pyspark_eda

Function

Univariate Analysis

Parameters

  • df (DataFrame): The input PySpark DataFrame.
  • table_name (str): The base table name under which the results are saved.
  • numerical_columns (list): The numerical columns of the table on which you want the analysis to be performed.
  • categorical_columns (list): The categorical columns of the table on which you want the analysis to be performed.
  • id_list (list, optional): List of columns to exclude from analysis.
  • print_graphs (int, optional): Whether to print graphs (1 for yes, 0 for no); default is 0.

Description

Performs univariate analysis on the DataFrame and prints summary statistics and visualizations. It returns a table with the following columns: column, total_count, min, max, mean, mode, null_percentage, skewness, kurtosis, stddev (standard deviation), q1, q2, q3 (quartiles), mean_plus_3std, mean_minus_3std, outlier_percentage, and frequency_distribution. You can display the table to view the results.

Example Usage

get_univariate_analysis

from pyspark.sql import SparkSession
from pyspark_eda import get_univariate_analysis

# Initialize Spark session
spark = SparkSession.builder.appName('DataAnalysis').getOrCreate()

# Load your data into a PySpark DataFrame
df = spark.read.csv('your_data.csv', header=True, inferSchema=True)

# Identify numerical and categorical columns
numerical_columns = ['col1', 'col2', 'col3']
categorical_columns = ['col4', 'col5', 'col6']

# Perform univariate analysis
get_univariate_analysis(
    df,
    table_name="your_table_name",
    numerical_columns=numerical_columns,
    categorical_columns=categorical_columns,
    id_list=['id_column'],
    print_graphs=1
)
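
Once the analysis has run, the summary can be inspected like any other Spark table. The snippet below is a minimal sketch that assumes the results are saved under the base table name passed above; if get_univariate_analysis instead returns a DataFrame, display that return value directly.

# Minimal sketch (assumption): the summary is saved as a Spark table under the
# base table name passed above; adjust if the function returns a DataFrame instead.
summary_df = spark.table("your_table_name")

# null_percentage and outlier_percentage are useful first flags for data quality
summary_df.select("column", "total_count", "null_percentage", "outlier_percentage").show(truncate=False)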

Function

Bivariate Analysis

Parameters

  • df (DataFrame): The input PySpark DataFrame.
  • table_name (str): The base table name under which the results are saved.
  • numerical_columns (list): The numerical columns of the table on which you want the analysis to be performed.
  • categorical_columns (list): The categorical columns of the table on which you want the analysis to be performed.
  • id_columns (list, optional): List of columns to exclude from analysis.
  • p_correlation_analysis (int, optional): Whether to perform Pearson's correlation analysis (1 for yes, 0 for no); default is 0.
  • s_correlation_analysis (int, optional): Whether to perform Spearman's correlation analysis (1 for yes, 0 for no); default is 0.
  • cramer_analysis (int, optional): Whether to perform Cramer's V analysis (1 for yes, 0 for no); default is 0.
  • anova_analysis (int, optional): Whether to perform ANOVA analysis (1 for yes, 0 for no); default is 0.
  • print_graphs (int, optional): Whether to print graphs (1 for yes, 0 for no); default is 0.

Description

Performs bivariate analysis on the DataFrame, including Pearson's correlation, Spearman's correlation, Cramer's V, and ANOVA. It returns a table with the following columns: Column_1, Column_2, Pearson_Correlation, Spearman_Correlation, Cramers_V, Anova_F_Value, Anova_P_Value. You can display the table to view the results.

Example Usage

get_bivariate_analysis

from pyspark.sql import SparkSession
from pyspark_eda import get_bivariate_analysis

# Initialize Spark session
spark = SparkSession.builder.appName('DataAnalysis').getOrCreate()

# Load your data into a PySpark DataFrame
df = spark.read.csv('your_data.csv', header=True, inferSchema=True)

# Identify numerical and categorical columns
numerical_columns = ['col1', 'col2', 'col3']
categorical_columns = ['col4', 'col5', 'col6']

# Perform bivariate analysis
get_bivariate_analysis(
    df,
    table_name="bivariate_analysis_results",
    numerical_columns=numerical_columns,
    categorical_columns=categorical_columns,
    id_columns=['id_column'],
    p_correlation_analysis=1,
    s_correlation_analysis=1,
    cramer_analysis=1,
    anova_analysis=1,
    print_graphs=0
)
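
The results table can then be queried to surface the strongest relationships. The sketch below assumes the results are saved under the table_name passed above; the 0.8 cutoff is purely illustrative.

from pyspark.sql import functions as F

# Minimal sketch (assumption): results were saved under the table_name used above
bivariate_df = spark.table("bivariate_analysis_results")

# Flag numerical pairs with a strong Pearson correlation (0.8 is an illustrative cutoff)
bivariate_df.filter(F.abs(F.col("Pearson_Correlation")) > 0.8) \
    .select("Column_1", "Column_2", "Pearson_Correlation") \
    .show(truncate=False)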

Function

Multivariate Analysis

Parameters

  • df (DataFrame): The input PySpark DataFrame.
  • table_name (str): The base table name under which the results are saved.
  • numerical_columns (list): The numerical columns of the table on which you want the analysis to be performed.
  • id_columns (list, optional): List of columns to exclude from analysis.

Description

Performs multivariate analysis on the DataFrame, computing the Variance Inflation Factor (VIF) for each numerical column. It returns a table with the following columns: Feature, VIF. You can display the table to view the results.

Example Usage

get_multivariate_analysis

from pyspark.sql import SparkSession
from pyspark_eda import get_multivariate_analysis

# Initialize Spark session
spark = SparkSession.builder.appName('DataAnalysis').getOrCreate()

# Load your data into a PySpark DataFrame
df = spark.read.csv('your_data.csv', header=True, inferSchema=True)

# Identify numerical columns
numerical_columns = ['col1', 'col2', 'col3']

# Perform multivariate analysis
get_multivariate_analysis(
    df,
    table_name="multivariate_analysis_results",
    numerical_columns=numerical_columns,
    id_columns=['id_column']
)
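
A common follow-up is to flag features with a high VIF as candidates for removal before modelling. The sketch below assumes the results are saved under the table_name passed above; the threshold of 5 is a common rule of thumb, not part of the library.

# Minimal sketch (assumption): results were saved under the table_name used above
vif_df = spark.table("multivariate_analysis_results")

# A VIF above ~5 is a common rule-of-thumb signal of multicollinearity
vif_df.filter(vif_df.VIF > 5).show(truncate=False)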
