
A powerful Python library designed to simplify data analysis by providing one-line solutions for cleaning, transformation, and visualization. Eliminate boilerplate code with intuitive, feature-rich functions tailored for analysts, researchers, and developers. Streamline workflows with advanced preprocessing and insightful visualizations, all in a single, user-friendly package.


DataAnalysts Package

DataAnalysts is a robust and versatile Python library meticulously designed to simplify and enhance the data analysis process. It caters to users across diverse domains, including students, professional data analysts, researchers, and enthusiasts. The library integrates powerful modules for data cleaning, transformation, and visualization, enabling seamless handling of datasets with minimal coding effort.

Whether you're dealing with messy datasets or preparing sophisticated visualizations, DataAnalysts offers an intuitive and interactive interface to perform your tasks with precision and efficiency.

🚀 Key Features

Data Cleaning:

  • Handle Missing Values:

    • Supports mean, median, and mode strategies for numeric columns.
    • Automatically fills missing categorical data using mode.
  • Remove Duplicates:

    • Eliminates duplicate rows to ensure data integrity.
  • Fill Unknown Values:

    • Replaces missing categorical data with 'Unknown' and numeric data with 0.
  • Convert Strings to Numeric:

    • Converts applicable string columns to numeric where possible.
  • Impute Missing Values by Group:

    • Fills missing values within groups defined by another column (e.g., by city or category).
  • Drop Low Variance Columns:

    • Removes columns with variance below a user-specified threshold.
  • Handle Outliers:

    • Detects and caps outliers in numeric columns using the IQR method.
  • Standardize Column Names:

    • Renames columns to lowercase, replaces spaces with underscores, and removes extra spaces.
  • Encode Categorical Variables:

    • Converts categorical columns to numeric codes.
  • Feature Scaling (Normalization):

    • Scales numeric columns to have a mean of 0 and a standard deviation of 1.
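Most of these operations wrap standard pandas idioms. As an illustration (independent of the library, with hypothetical column names), group-wise imputation and IQR-based outlier capping as described above can be sketched like this:

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["NY", "NY", "LA", "LA"],
    "temp": [10.0, None, 25.0, 30.0],
})

# Impute missing values within groups defined by another column (here: city)
df["temp"] = df.groupby("city")["temp"].transform(lambda s: s.fillna(s.mean()))

# Cap outliers in a numeric column using the 1.5 * IQR rule
q1, q3 = df["temp"].quantile([0.25, 0.75])
iqr = q3 - q1
df["temp"] = df["temp"].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
```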

Interactive Cleaning:

  • Customizable Options:

    • Perform cleaning tasks step-by-step through a user-friendly menu interface.
  • Menu Options:

    • Handle missing values, remove duplicates, drop columns, rename columns, handle outliers, encode categorical variables, fill unknown values, convert strings to numeric, impute missing values by group, drop low variance columns, and scale features.

Data Transformation:

  • Scaling:

    • Standard, Min-Max, and Robust scaling strategies for numeric columns.
  • Encoding:

    • Label encoding for categorical columns.
  • Dimensionality Reduction:

    • Principal Component Analysis (PCA) to reduce dataset dimensions.
  • Duplicate Removal:

    • Automatically remove duplicate rows.
  • Low-Variance Feature Removal:

    • Remove features with variance below a defined threshold.
  • Interactive Transformation:

    • Choose transformation steps interactively.

Data Visualization:

  • Histogram:

    • Plot a histogram with advanced customization for bins, labels, and title.
  • Bar Chart:

    • Generate bar charts with customizable sizes, labels, and colors.
  • Line Plot:

    • Create line plots with options for markers, colors, and labels.
  • Scatter Plot:

    • Generate scatter plots with hue-based grouping for better insights.
  • Heatmap:

    • Visualize correlation matrices using customizable heatmaps.
  • Pair Plot:

    • Plot pairwise relationships between numeric columns.
  • Box Plot:

    • Create box plots to visualize data distribution and outliers.
  • Violin Plot:

    • Generate violin plots to show data distribution with additional density insights.

Interactive Visualization:

  • Customizable Options:

    • Perform visualizations interactively through a user-friendly menu interface.
  • Menu Options:

    • Choose from histograms, bar charts, line plots, scatter plots, heatmaps, pair plots, box plots, and violin plots.

Data Loading:

  • CSV Files:

    • Easily load datasets from CSV files with automatic logging.
  • Excel Files:

    • Load data from Excel sheets with customizable sheet selection.

Error Handling:

  • Robust Exception Handling:

    • Provides clear error messages for debugging and ensures smooth execution.

Interactive Tools:

  • Data Cleaning:

    • Step-by-step interactive data cleaning options.
  • Data Transformation:

    • Hands-on transformation with flexible menu options.
  • Data Visualization:

    • Interactive plotting with multiple customization options.

🛠️ Installation Steps

1. Install the Package from PyPI

To use the library in your local environment, install it directly from PyPI:

pip install dataanalysts

In a notebook environment such as Google Colab, prefix the command with an exclamation mark:

!pip install dataanalysts

💡 Usage Examples

1. Import the Library

import dataanalysts as da
import pandas as pd

2. Load Data

df = da.csv('data.csv')
df_excel = da.excel('data.xlsx', sheet_name='Sheet1')

3. Data Cleaning

The Data Cleaning Module simplifies common preprocessing tasks such as handling missing values, removing duplicates, and fixing structural errors, helping you prepare data efficiently for analysis or modeling.


Key Features

  1. Remove Duplicates: Automatically detect and remove duplicate rows from your dataset.
  2. Handle Missing Values: Fill or drop missing values using customizable strategies (mean, median, mode, or specific values).
  3. Fix Structural Errors: Standardize text data by converting it to lowercase or uppercase and correcting inconsistencies.
  4. Handle Outliers: Detect and handle outliers in numerical columns using the Interquartile Range (IQR) method or custom thresholds.
  5. Convert Data Types: Convert columns to specific data types like integer, float, or string.
  6. Encode Categorical Variables: Perform one-hot encoding or label encoding for categorical columns.
  7. Scale Features: Normalize or standardize numerical columns using Min-Max or Standard scaling.
  8. Filter Rows: Filter rows based on conditions like column values or ranges.
  9. Split Columns: Split a single column into multiple columns using a specified delimiter.
  10. Validate Data: Ensure numerical values are within specified ranges and clip those that fall outside.
  11. Interactive Cleaning: Provides an interactive menu to perform various cleaning tasks step by step.

Syntax and Examples

1. Remove Duplicates

Remove duplicate rows from the dataset.

Syntax:

da.clean(df, strategy='remove_duplicates')

Example:

cleaned_df = da.clean(df, strategy='remove_duplicates')

2. Handle Missing Values

Fill or drop missing values using various strategies.

Syntax:

da.clean(df, strategy='handle_missing', strategy_type='mean')

Options:

  • strategy_type: 'mean', 'median', 'mode', or 'fill'
  • value: Custom value for filling (if strategy_type='fill')

Example:

# Fill missing values with mean
cleaned_df = da.clean(df, strategy='handle_missing', strategy_type='mean')

# Fill missing values with custom values
cleaned_df = da.clean(df, strategy='handle_missing', strategy_type='fill', value={'Age': 25, 'Gender': 'Unknown'})
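For reference, these strategies correspond to plain pandas fillna calls; a rough equivalent of the two examples above (hypothetical column names):

```python
import pandas as pd

df = pd.DataFrame({"Age": [25.0, None, 30.0], "Gender": ["F", None, "M"]})

# strategy_type='mean' is roughly: fill numeric NaNs with the column mean
df["Age"] = df["Age"].fillna(df["Age"].mean())

# strategy_type='fill' with a value dict is roughly: fillna with that dict
df = df.fillna({"Gender": "Unknown"})
```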

3. Fix Structural Errors

Standardize text data by fixing structural inconsistencies.

Syntax:

da.clean(df, strategy='fix_structural', column='Category', strategy_type='lowercase')

Options:

  • column: Column to clean.
  • strategy_type: 'lowercase' or 'uppercase'.

Example:

cleaned_df = da.clean(df, strategy='fix_structural', column='Category', strategy_type='lowercase')

4. Handle Outliers

Detect and handle outliers in numerical columns.

Syntax:

da.clean(df, strategy='handle_outliers', column='Score')

Options:

  • column: Column to handle outliers.

Example:

cleaned_df = da.clean(df, strategy='handle_outliers', column='Score')

5. Convert Data Types

Convert columns to specific data types.

Syntax:

da.clean(df, strategy='convert_dtype', column='Age', dtype='int')

Options:

  • column: Column to convert.
  • dtype: Target data type ('int', 'float', 'str').

Example:

cleaned_df = da.clean(df, strategy='convert_dtype', column='Age', dtype='int')

6. Encode Categorical Variables

Perform one-hot encoding for categorical variables.

Syntax:

da.clean(df, strategy='encode_categorical', columns=['Category'])

Options:

  • columns: List of categorical columns.

Example:

cleaned_df = da.clean(df, strategy='encode_categorical', columns=['Category'])

7. Scale Features

Normalize or standardize numerical columns.

Syntax:

da.clean(df, strategy='scale', columns=['Age'], scaler='minmax')

Options:

  • columns: List of numerical columns to scale.
  • scaler: 'minmax' or 'standard'.

Example:

cleaned_df = da.clean(df, strategy='scale', columns=['Age'], scaler='minmax')

8. Filter Rows

Filter rows based on conditions.

Syntax:

da.clean(df, strategy='filter', condition="Age > 30")

Options:

  • condition: String condition to filter rows.

Example:

cleaned_df = da.clean(df, strategy='filter', condition="Age > 30")
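The string condition suggests pandas' query expression syntax; the same filter can be sketched in plain pandas:

```python
import pandas as pd

df = pd.DataFrame({"Age": [25, 35, 45]})

# A string condition such as "Age > 30" maps directly onto DataFrame.query
filtered = df.query("Age > 30")
```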

9. Split Columns

Split a single column into multiple columns using a specified delimiter.

Syntax:

da.clean(df, strategy='split_column', column='FullName', new_columns=['FirstName', 'LastName'], delimiter=' ')

Options:

  • column: Column to split.
  • new_columns: List of new column names.
  • delimiter: Delimiter to use for splitting.

Example:

cleaned_df = da.clean(df, strategy='split_column', column='FullName', new_columns=['FirstName', 'LastName'], delimiter=' ')
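The same split can be sketched in plain pandas with str.split (hypothetical column names matching the example):

```python
import pandas as pd

df = pd.DataFrame({"FullName": ["Alice Smith", "Bob Johnson"]})

# Split one column into two new columns on the first space
df[["FirstName", "LastName"]] = df["FullName"].str.split(" ", n=1, expand=True)
```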

10. Validate Data

Ensure numerical values are within specified ranges.

Syntax:

da.clean(df, strategy='validate', column='Score', min_value=0, max_value=100)

Options:

  • column: Column to validate.
  • min_value: Minimum acceptable value.
  • max_value: Maximum acceptable value.

Example:

cleaned_df = da.clean(df, strategy='validate', column='Score', min_value=0, max_value=100)
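Clipping values into a range is a one-liner in pandas; the validation above presumably behaves like:

```python
import pandas as pd

df = pd.DataFrame({"Score": [-5, 50, 120]})

# Clip values that fall outside [0, 100] back into the range
df["Score"] = df["Score"].clip(lower=0, upper=100)
```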

11. Interactive Cleaning

Perform interactive cleaning step by step using a menu-based approach.

Syntax:

da.interactive_clean(df)

Example:

cleaned_df = da.interactive_clean(df)

Comprehensive Example

Here’s how you can use the clean function to perform multiple cleaning operations:

import dataanalysts as da
import pandas as pd

# Sample dataset
data = {
    'Name': ['Alice', 'Bob', 'Alice'],
    'Age': [25, None, 25],
    'Gender': ['F', 'M', None],
    'FullName': ['Alice Smith', 'Bob Johnson', 'Alice Smith']
}
df = pd.DataFrame(data)

# Remove duplicates
cleaned_df = da.clean(df, strategy='remove_duplicates')

# Handle missing values
cleaned_df = da.clean(cleaned_df, strategy='handle_missing', strategy_type='fill', value={'Age': 30, 'Gender': 'Unknown'})

# Fix structural errors
cleaned_df = da.clean(cleaned_df, strategy='fix_structural', column='Gender', strategy_type='uppercase')

# Split column
cleaned_df = da.clean(cleaned_df, strategy='split_column', column='FullName', new_columns=['FirstName', 'LastName'], delimiter=' ')

# Interactive cleaning
cleaned_df = da.interactive_clean(cleaned_df)

Logging

  • Logs are stored in the cleaner.log file.
  • Each cleaning step is logged with details about the operation and parameters used.
  • Errors during cleaning are logged for debugging purposes.

This module simplifies data cleaning, making it accessible and efficient for analysts, researchers, and developers alike.

4. Data Transformation

The Data Transformation Module enables comprehensive data preprocessing and transformation for datasets, including scaling, dimensionality reduction, encoding, and more. The module supports both direct and interactive transformation methods.


Key Features

  • Scaling: Standard, Min-Max, and Robust scaling strategies for numeric columns.
  • Encoding: Label encoding for categorical columns.
  • Dimensionality Reduction: Principal Component Analysis (PCA) to reduce dataset dimensions.
  • Duplicate Removal: Automatically remove duplicate rows.
  • Low-Variance Feature Removal: Remove features with variance below a defined threshold.
  • Interactive Transformation: Choose transformation steps interactively.

Syntax and Examples

1. Scaling

Scales numeric columns based on the selected strategy:

  • Standard Scaling: Centers data around mean (0) with standard deviation (1).
  • Min-Max Scaling: Scales data to a range of [0, 1].
  • Robust Scaling: Handles outliers by scaling data based on the interquartile range (IQR).

Syntax:

import dataanalysts as da

# Standard Scaling
df_transformed = da.transform(df, strategy='standard')

# Min-Max Scaling
df_transformed = da.transform(df, strategy='minmax')

# Robust Scaling
df_transformed = da.transform(df, strategy='robust')
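These three strategies correspond to scikit-learn's StandardScaler, MinMaxScaler, and RobustScaler. As a minimal sketch of what standard scaling does to a numeric column (an illustration, not the library's implementation):

```python
import pandas as pd

df = pd.DataFrame({"Age": [25.0, 30.0, 35.0, 40.0, 45.0]})

# Standard scaling: subtract the mean, divide by the population standard
# deviation (ddof=0), which is what scikit-learn's StandardScaler computes
scaled = (df["Age"] - df["Age"].mean()) / df["Age"].std(ddof=0)
```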

2. Encoding

Encodes categorical columns into numeric values using label encoding. This is particularly useful for machine learning models that require numeric data.

Syntax:

# Encode categorical columns
df_transformed = da.transform(df, encode_categorical=True)

3. Duplicate Removal

Automatically removes duplicate rows from the dataset.

Syntax:

# Remove duplicate rows
df_transformed = da.transform(df, remove_duplicates=True)

4. Low-Variance Feature Removal

Removes features with variance below a specified threshold to reduce noise in the data.

Syntax:

# Remove features with variance below 0.01
df_transformed = da.transform(df, remove_low_variance=True, variance_threshold=0.01)

5. Dimensionality Reduction (PCA)

Uses Principal Component Analysis to reduce the number of features while retaining most of the variance in the dataset.

Syntax:

# Apply PCA to retain 3 components
df_pca = da.transform(df_transformed, reduce_dimensionality=True, n_components=3)
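Under the hood, PCA centers the data and projects it onto the leading singular vectors; a minimal numpy sketch of reducing to one component (an illustration on made-up data, not the library's implementation):

```python
import numpy as np

# Two perfectly correlated features, so one component captures all variance
X = np.array([[25.0, 50.0], [30.0, 60.0], [35.0, 70.0],
              [40.0, 80.0], [45.0, 90.0]])

# Center the data, then project onto the leading right singular vectors
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 1
X_reduced = Xc @ Vt[:n_components].T
```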

6. Interactive Transformation

Provides an interactive menu for selecting transformation steps one at a time.

Menu Options:

  1. Apply Standard Scaling
  2. Apply Min-Max Scaling
  3. Apply Robust Scaling
  4. Encode Categorical Columns
  5. Remove Duplicates
  6. Remove Low-Variance Features
  7. Apply PCA for Dimensionality Reduction
  8. Exit Transformation

Syntax:

# Perform interactive transformation
df_interactive_transform = da.interactive_transform(df)

Comprehensive Example

Here’s an end-to-end example combining multiple transformations:

import dataanalysts as da
import pandas as pd

# Sample dataset
data = {
    'Age': [25, 30, 35, 40, 45],
    'Salary': [50000, 60000, 70000, 80000, 90000],
    'Department': ['HR', 'IT', 'Finance', 'IT', 'HR']
}
df = pd.DataFrame(data)

# Step 1: Apply standard scaling
df_transformed = da.transform(df, strategy='standard')

# Step 2: Apply PCA to reduce dimensions to 2 components
df_pca = da.transform(df_transformed, reduce_dimensionality=True, n_components=2)

# Step 3: Perform additional transformations interactively
df_final = da.interactive_transform(df_pca)

print(df_final)

Logging

  • Logs are stored in the transformer.log file.
  • Each transformation step is logged with details about the operation and parameters used.
  • Errors during transformations are also logged for debugging purposes.

5. Data Visualization


The Data Visualization Module provides advanced tools for creating insightful and customized visual representations of your dataset. With this module, you can generate a variety of plots, including histograms, scatter plots, heatmaps, and more, with customization options for size, titles, and styles.


Key Features

  • Histogram: Visualize the distribution of a single numeric column.
  • Bar Chart: Compare values across categories.
  • Line Chart: Display trends over time or sequential data.
  • Scatter Plot: Show relationships between two numeric columns.
  • Heatmap: Visualize correlations between numeric columns.
  • Pair Plot: Display pairwise relationships in a dataset.
  • Box Plot: Compare distributions of a numeric column across categories.
  • Violin Plot: Combine box plot and density plot for richer insights.
  • Interactive Visualization: Select and generate plots interactively.

Syntax and Examples

1. Histogram

Visualize the distribution of a single numeric column.

Syntax:

da.histogram(df, column='age', bins=30, kde=True)

Customization Options:

  • bins: Number of bins for the histogram.
  • kde: Whether to display the Kernel Density Estimate.
  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

2. Bar Chart

Compare values across categories.

Syntax:

da.barchart(df, x_col='city', y_col='population')

Customization Options:

  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

3. Line Chart

Display trends over time or sequential data.

Syntax:

da.linechart(df, x_col='date', y_col='sales')

Customization Options:

  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

4. Scatter Plot

Show relationships between two numeric columns.

Syntax:

da.scatter(df, x_col='height', y_col='weight', hue='gender')

Customization Options:

  • hue: Column for color encoding.
  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

5. Heatmap

Visualize correlations between numeric columns.

Syntax:

da.heatmap(df)

Customization Options:

  • annot: Whether to annotate the heatmap with correlation values.
  • cmap: Colormap for the heatmap.
  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • custom_title: Custom title for the chart.
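The heatmap presumably visualizes the correlation matrix of the numeric columns; that matrix can be computed directly in pandas:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [25, 30, 35, 40, 45],
    "salary": [50000, 60000, 70000, 80000, 90000],
})

# The correlation matrix underlying the heatmap
corr = df.corr(numeric_only=True)
```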

6. Pair Plot

Display pairwise relationships in a dataset.

Syntax:

da.pairplot(df, hue='category')

Customization Options:

  • hue: Column for color encoding.
  • size: Tuple specifying figure size for each subplot.
  • title_fontsize: Font size for the title.
  • custom_title: Custom title for the chart.

7. Box Plot

Compare distributions of a numeric column across categories.

Syntax:

da.boxplot(df, x_col='region', y_col='sales')

Customization Options:

  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

8. Violin Plot

Combine box plot and density plot for richer insights.

Syntax:

da.violinplot(df, x_col='region', y_col='sales')

Customization Options:

  • size: Tuple specifying figure size.
  • title_fontsize: Font size for the title.
  • axis_fontsize: Font size for axis labels.
  • custom_title: Custom title for the chart.

9. Interactive Visualization

Provides an interactive menu for generating various plots one at a time.

Menu Options:

  1. Histogram
  2. Bar Chart
  3. Line Plot
  4. Scatter Plot
  5. Heatmap
  6. Pair Plot
  7. Box Plot
  8. Violin Plot
  9. Exit Visualization

Syntax:

# Perform interactive visualization
da.interactive_plot(df)

Comprehensive Example

Here’s how you can use the visualizer functions to create multiple plots:

import dataanalysts as da
import pandas as pd

# Sample dataset
data = {
    'age': [25, 30, 35, 40, 45],
    'salary': [50000, 60000, 70000, 80000, 90000],
    'city': ['NY', 'LA', 'SF', 'CHI', 'HOU'],
    'gender': ['M', 'F', 'F', 'M', 'M']
}
df = pd.DataFrame(data)

# Histogram
da.histogram(df, column='age', bins=20, kde=True)

# Bar Chart
da.barchart(df, x_col='city', y_col='salary')

# Scatter Plot
da.scatter(df, x_col='age', y_col='salary', hue='gender')

# Heatmap
da.heatmap(df)

# Interactive Visualization
da.interactive_plot(df)

Logging

  • Logs are stored in the visualizer.log file.
  • Each visualization step is logged with details about the operation and parameters used.
  • Errors during visualizations are also logged for debugging purposes.

This module provides highly customizable and interactive visualizations to gain insights from your data effectively.


🤝 Contributing

Contributions are welcome! Please submit a pull request via our GitHub Repository.


📜 License

This project is licensed under the MIT License. See the LICENSE file for details.


🛠️ Support

If you encounter any issues, feel free to open an issue on our GitHub Issues page.

