pyspark_eda
A Python package for univariate and bivariate data analysis using PySpark.
pyspark_eda is a Python library for performing exploratory data analysis (EDA) using PySpark. It offers functionality for both univariate and bivariate analysis, handles missing values and outliers, and visualizes data distributions.
Features
- Univariate analysis: Analyze numerical and categorical columns individually; optionally displays a histogram and a frequency distribution table.
- Bivariate analysis: Includes correlation, Cramer's V, and ANOVA; optionally displays scatter plots.
- Automatic handling: Deals with missing values and outliers seamlessly.
- Visualization: Provides graphical representations of data distributions and relationships.
Installation
You can install pyspark_eda via pip:
pip install pyspark_eda
Example Usage
Univariate Analysis
from pyspark.sql import SparkSession
from pyspark_eda import get_univariate_analysis
# Initialize Spark session
spark = SparkSession.builder.appName('DataAnalysis').getOrCreate()
# Load your data into a PySpark DataFrame
df = spark.read.csv('your_data.csv', header=True, inferSchema=True)
# Perform univariate analysis
get_univariate_analysis(df, table_name="your_table_name", print_graphs=1, id_list=['id_column'])
Bivariate Analysis
from pyspark.sql import SparkSession
from pyspark_eda import get_bivariate_analysis
# Initialize Spark session
spark = SparkSession.builder.appName('DataAnalysis').getOrCreate()
# Load your data into a PySpark DataFrame
df = spark.read.csv('your_data.csv', header=True, inferSchema=True)
# Perform bivariate analysis
get_bivariate_analysis(df, table_name="bivariate_analysis_results", print_graphs=1, id_columns=['id_column'], correlation_analysis=1, cramer_analysis=1, anova_analysis=1)
Functions
get_univariate_analysis
Parameters
- df (DataFrame): The input PySpark DataFrame.
- table_name (str): The base table name under which the results are saved.
- print_graphs (int, optional): Whether to print graphs (1 for yes, 0 for no); default is 0.
- id_list (list, optional): List of columns to exclude from analysis.
Description
Performs univariate analysis on the DataFrame and prints summary statistics and visualizations. It returns a table with the following columns: column, total_count, min, max, mean, mode, null_percentage, skewness, kurtosis, stddev (standard deviation), q1, q2, q3 (quartiles), mean_plus_3std, mean_minus_3std, outlier_percentage, and frequency_distribution. You can display the table to view the results.
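The mean_plus_3std, mean_minus_3std, and outlier_percentage columns follow the usual three-sigma rule. As a rough illustration of what those values mean (plain Python with made-up sample data, not the library's own code):

```python
import statistics

# Hypothetical sample: 40 typical values plus one extreme value
values = [10] * 20 + [12] * 20 + [200]

mean = statistics.mean(values)
std = statistics.stdev(values)   # sample standard deviation
upper = mean + 3 * std           # mean_plus_3std
lower = mean - 3 * std           # mean_minus_3std

# Values outside [lower, upper] count as outliers
outliers = [v for v in values if v < lower or v > upper]
outlier_percentage = 100 * len(outliers) / len(values)
```

Here the single extreme value (200) falls outside the three-sigma band, giving an outlier_percentage of roughly 2.4%.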
get_bivariate_analysis
Parameters
- df (DataFrame): The input PySpark DataFrame.
- table_name (str): The base table name under which the results are saved.
- print_graphs (int, optional): Whether to print graphs (1 for yes, 0 for no); default is 0.
- id_columns (list, optional): List of columns to exclude from analysis.
- correlation_analysis (int, optional): Whether to perform correlation analysis (1 for yes, 0 for no); default is 1.
- cramer_analysis (int, optional): Whether to perform Cramer's V analysis (1 for yes, 0 for no); default is 1.
- anova_analysis (int, optional): Whether to perform ANOVA analysis (1 for yes, 0 for no); default is 1.
Description
Performs bivariate analysis on the DataFrame, including correlation, Cramer's V, and ANOVA. It returns a table with the following columns: Column_1, Column_2, Correlation_Coefficient, Cramers_V, Anova_F_Value, Anova_P_Value. You can display the table to view the results.
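For reference, the Cramers_V column measures association between two categorical columns: it is derived from the chi-squared statistic of their contingency table and ranges from 0 (no association) to 1 (perfect association). A minimal plain-Python sketch with a hypothetical 2x2 table (this is not the library's internal implementation):

```python
import math

# Hypothetical contingency table: rows = category A levels, cols = category B levels
observed = [[30, 10],
            [10, 30]]

n = sum(sum(row) for row in observed)
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

# Pearson chi-squared statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
k = min(len(observed), len(observed[0]))
cramers_v = math.sqrt(chi2 / (n * (k - 1)))
```

For this table chi2 is 20 on n = 80 observations, giving a Cramer's V of 0.5, i.e. a moderate association.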
Contact
- Author: Tanya Irani
- Email: tanyairani22@gmail.com