A Python package for efficient data cleaning and preprocessing
Pyspan Overview
🎉🎉🎉 Welcome to pyspan! 🎉🎉🎉
pyspan is a Python package designed to simplify data cleaning and preprocessing tasks, especially for Pandas DataFrames. It offers a range of functions to enhance data handling, including:
- Handling missing values
- Detecting outliers and anomalies
- Performing spell checks and error identification
- Logging function calls and parameters efficiently
- Undo functionality for reverting recent changes made to the data
Key Features:
- Pandas Support (Optional): You can use pyspan with or without Pandas. It supports data passed in formats such as list, string, tuple, dict, or JSON. This makes it versatile for different data sources and formats.
- Undo Changes: Need to undo the last changes made to your data? pyspan includes an undo feature to help you revert the most recent modifications.
Whether you're working with a Pandas DataFrame or any other data structure, pyspan helps streamline your data preprocessing tasks, making it easier to clean, preprocess, and analyze your data.
🚀 Installation
To get started with pyspan, install the package using pip:
pip install pyspan
Functions
1. handle_nulls
Description
The handle_nulls function handles missing values in a DataFrame by offering several options: removing rows or columns with nulls, replacing nulls with a custom value, or imputing them using various methods like mean, median, or mode. This function can also drop rows or columns exceeding a specified threshold of null values.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The input data containing missing values.
- columns (Optional[Union[str, List[str]]], default: None): A single column name or a list of column names to apply the operation to. If not provided, the function applies to the entire DataFrame.
- action (str, default: 'remove'): The strategy to handle missing values. Options include:
  - 'remove': Drops rows with null values in the specified columns.
  - 'replace': Replaces null values with a specified custom value.
  - 'impute': Imputes missing values using a chosen strategy (e.g., mean, median, mode, interpolate, forward fill, backward fill).
- with_val (Optional[Union[int, float, str]], default: None): The custom value to replace NaNs with; applicable only if action='replace'.
- by (Optional[str], default: None): The strategy for imputing missing values; applicable only if action='impute'. Available options are:
  - 'mean': Impute with the mean of the column.
  - 'median': Impute with the median of the column.
  - 'mode': Impute with the most frequent value in the column.
  - 'interpolate': Use linear interpolation to impute missing values.
  - 'forward_fill': Propagate the last valid observation forward.
  - 'backward_fill': Use the next valid observation to fill gaps backward.
- inplace (bool, default: False): If True, modifies the DataFrame in place and returns None. If False, returns a new DataFrame with null values handled according to the specified action.
- threshold (Optional[float], default: None): Sets a threshold (0-100) for the percentage of missing values. If a row or column exceeds this threshold, it is dropped. Ignored if None.
- axis (str, default: 'rows'): Specifies whether to apply the threshold to 'rows' or 'columns'.
Returns:
Optional[pd.DataFrame, List, dict, str]: The data with null values handled, or None if inplace=True.
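A minimal sketch on a toy DataFrame, assuming the signature documented above (the column names here are illustrative):
import pandas as pd
from pyspan import handle_nulls
df = pd.DataFrame({'Age': [25, None, 40, None], 'City': ['NY', 'LA', None, 'SF']})
# Impute missing ages with the column mean; other columns are untouched
df_imputed = handle_nulls(df, columns='Age', action='impute', by='mean')
# Drop any column in which more than 40% of the values are missing
df_thinned = handle_nulls(df, threshold=40, axis='columns')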
2. remove
Description
The remove function handles two types of DataFrame modifications: removing duplicate rows or removing specific columns. The type of operation is specified by the operation parameter.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The input data on which the operation will be performed.
- operation (str): Specifies the type of removal operation. Options are:
  - 'duplicates': Remove duplicate rows.
  - 'columns': Remove specified columns.
- columns (Optional[Union[str, List[str]]], default: None): Columns to consider for the operation:
  - For 'duplicates': Columns to check for duplicates.
  - For 'columns': Columns to be removed.
- keep (Optional[str], default: 'first'): Determines which duplicates to keep. Options are:
  - 'first': Keep the first occurrence of each duplicate.
  - 'last': Keep the last occurrence of each duplicate.
  - 'none': Remove all duplicates.
- consider_all (bool, default: True): Relevant only when operation is 'duplicates'. If True, removes the entire row if any duplicates are found in the specified columns.
- inplace (bool, default: False): If True, modifies the DataFrame in place. If False, returns a new DataFrame.
Returns:
Optional[pd.DataFrame, List, dict, str]: Data with duplicates or columns removed according to the specified criteria. Returns None if inplace=True.
Raises:
ValueError: If invalid columns are specified or the operation is invalid.
TypeError: If input types are incorrect.
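A quick sketch of the keep options on toy data, assuming the documented signature:
import pandas as pd
from pyspan import remove
df = pd.DataFrame({'Type': ['A', 'A', 'B'], 'Value': [1, 2, 3]})
# Keep only the first occurrence of each duplicated 'Type'
first_only = remove(df, operation='duplicates', columns='Type', keep='first')
# Drop every row whose 'Type' is duplicated
no_dupes = remove(df, operation='duplicates', columns='Type', keep='none')
# Drop the 'Value' column entirely
no_value = remove(df, operation='columns', columns='Value')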
3. refine
Description
The refine function is designed to clean and standardize a DataFrame by recommending readable column names and optionally cleaning row data. It suggests renaming columns to remove special characters and applies consistent formatting. Additionally, it can clean row data by removing unwanted characters and extra spaces.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): DataFrame to rename columns in.
- clean_rows (bool, default: True): If True, the function also cleans each row's content by removing special characters and extra spaces.
Returns:
pd.DataFrame, List, dict, str: Data with columns renamed.
4. manual_rename_columns
Description
Renames columns in a DataFrame using a provided dictionary mapping.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): DataFrame to rename columns in.
- rename_dict (dict): Dictionary mapping current column names to new column names.
Returns:
pd.DataFrame, List, dict, str: Data with columns renamed according to rename_dict.
5. format_dt
Description
Adds additional date/time-based columns to a DataFrame and formats date/time columns.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): DataFrame to which new date/time features will be added.
- columns (str): The name of the column containing date/time data.
- day (bool, default: False): If True, adds a column with the day of the month.
- month (bool, default: False): If True, adds a column with the month.
- year (bool, default: False): If True, adds a column with the year.
- quarter (bool, default: False): If True, adds a column with the quarter of the year.
- hour (bool, default: False): If True, adds a column with the hour of the day.
- minute (bool, default: False): If True, adds a column with the minute of the hour.
- day_of_week (bool, default: False): If True, adds a column with the day of the week (e.g., Monday, Sunday).
- date_format (str, default: "%Y-%m-%d"): Desired date format.
- time_format (str, default: "%H:%M:%S"): Desired time format.
- from_timezone (Optional[str], default: None): Original timezone of the datetime column(s).
- to_timezone (Optional[str], default: None): Desired timezone for the datetime column(s).
Returns:
pd.DataFrame, List, dict, str: Data with added and formatted date/time features.
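A minimal sketch, assuming the parameters documented above; 'OrderDate' is a made-up column name:
import pandas as pd
from pyspan import format_dt
df = pd.DataFrame({'OrderDate': ['2024-01-15 09:30:00', '2024-06-01 17:45:00']})
# Add month, quarter, and day-of-week columns, and reformat the date as DD-MM-YYYY
df_dt = format_dt(df, columns='OrderDate', month=True, quarter=True, day_of_week=True, date_format='%d-%m-%Y')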
6. split_column
Description
Splits a single column into multiple columns based on a delimiter.
Parameters:
-
df(pd.DataFrame,List,dict,tuple,np.ndarray,str):
DataFrame containing the column to split. -
column(str):
Name of the column to be split. -
delimiter(str, default:None):
Delimiter to use for splitting (optional), e.g.,,,;,|.
Returns:
pd.DataFrame,List,dict,str: Data with the specified column split into multiple columns.
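For instance (a sketch assuming the documented signature; 'FullName' is an illustrative column name):
import pandas as pd
from pyspan import split_column
df = pd.DataFrame({'FullName': ['Ada,Lovelace', 'Alan,Turing']})
# Split 'FullName' on the comma into separate columns
df_split = split_column(df, column='FullName', delimiter=',')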
7. detect_errors
Description
Detects and flags data entry errors including invalid dates and misspelled words.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): DataFrame to analyze.
- date_columns (Optional[List[str]], default: None): List of columns to check for invalid dates.
- numeric_columns (Optional[List[str]], default: None): List of columns to check for numeric format errors.
- text_columns (Optional[List[str]], default: None): List of text columns to perform spell checking on.
- date_format (str, default: '%Y-%m-%d'): Expected date format for validation.
Returns:
pd.DataFrame, List, dict, str: Data with detected errors flagged.
8. convert_type
Description
Recommends and applies data type conversions based on the analysis of each column's data.
Parameters:
- df (pd.DataFrame or pd.Series, List, dict, tuple, np.ndarray, str): DataFrame or Series to analyze.
- columns (Optional[List[str]], default: None): Specific columns to analyze (optional).
Returns:
pd.DataFrame or pd.Series, List, dict, str: Data or Series with recommended data type conversions applied.
9. detect_outliers
Description
The detect_outliers function identifies outliers in a specified column of a DataFrame using traditional methods like Interquartile Range (IQR) and Z-Score, or advanced anomaly detection algorithms (Isolation Forest, Local Outlier Factor, and DBSCAN). The function removes outliers and prints the data's shape before and after outlier removal.
Parameters:
- df (pd.DataFrame or pd.Series, List, dict, tuple, np.ndarray, str): DataFrame or Series to analyze.
- method (str, default: 'iqr'): Outlier detection method. Options are:
  - 'z-score': Use the z-score.
  - 'iqr': Use the interquartile range.
- threshold (float, default: 1.5): Threshold for outlier detection.
- columns (Optional[List[str]], default: None): List of columns, or a single column, to apply outlier or anomaly detection to.
- handle_missing (bool, default: True): Whether to handle missing values by dropping them.
- anomaly_method (Optional[str], default: None): Specifies an anomaly detection model. If provided, it overrides the method parameter. Options are:
  - 'isolation_forest': Uses the Isolation Forest model.
  - 'lof': Uses the Local Outlier Factor model.
  - 'dbscan': Uses the DBSCAN clustering algorithm.
- contamination (float, default: 0.05): Proportion of outliers in the data for 'isolation_forest' and 'lof'.
- n_neighbors (int, default: 20): Number of neighbors for the Local Outlier Factor model.
- eps (float, default: 0.5): Maximum distance for DBSCAN clustering to group points as neighbors.
- min_samples (int, default: 5): Minimum samples for DBSCAN to define a cluster.
Returns:
pd.DataFrame, List, dict, str: Data with detected outliers or anomalies removed.
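A minimal sketch of IQR-based detection, assuming the documented signature; the toy data is contrived so that one value is an obvious outlier:
import pandas as pd
from pyspan import detect_outliers
df = pd.DataFrame({'Amount': [10, 12, 11, 13, 200]})
# With the default IQR multiplier of 1.5, the 200 row should be flagged and dropped
df_clean = detect_outliers(df, columns='Amount', method='iqr', threshold=1.5)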
10. Logging Functions
display_logs
Description
Displays log entries for all functions that have been called, including their parameters.
Returns:
None
11. remove_chars
Description
Cleans and formats text in specified columns of a DataFrame by trimming spaces and removing custom characters.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The DataFrame containing the columns to be cleaned.
- columns (Optional[List[str]]): List of column names to apply the cleaning function to.
- strip_all (bool, default: False): If True, removes all extra spaces within the text.
- custom_characters (Optional[str], default: None): String of custom characters to be removed.
Returns:
pd.DataFrame, List, dict, str: Data with the specified columns cleaned.
12. reformat
Description
Applies the data type and formatting from a reference column to a target column in the same DataFrame.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The DataFrame containing both the target and reference columns.
- target_column (str): The name of the column to format.
- reference_column (str): The name of the column to borrow formatting from.
Returns:
pd.DataFrame, List, dict, str: Data with the target column formatted based on the reference column.
Raises:
ValueError: If the target or reference column does not exist.
TypeError: If the reference column's type cannot be applied to the target column, or if the data type is unsupported.
13. scale_data
Description
Scales data in a DataFrame using specified scaling methods: Min-Max Scaling, Robust Scaling, or Standard Scaling.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The DataFrame containing the data to be scaled.
- method (str, default: 'minmax'): Scaling method to use. Options are:
  - 'minmax': Min-Max Scaling.
  - 'robust': Robust Scaling.
  - 'standard': Standard Scaling.
- columns (Optional[List[str]], default: None): List of column names to apply scaling to. If None, scales all numerical columns.
Returns:
pd.DataFrame, List, dict, str: Data with the specified columns scaled according to the selected method.
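A minimal sketch, assuming the documented signature; Min-Max Scaling maps each column onto [0, 1]:
import pandas as pd
from pyspan import scale_data
df = pd.DataFrame({'A': [0, 5, 10], 'B': [100, 200, 300]})
# 'A' should become 0.0, 0.5, 1.0 (and 'B' likewise) after Min-Max Scaling
df_scaled = scale_data(df, method='minmax', columns=['A', 'B'])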
14. undo
Description
Reverts a DataFrame to the previous state by undoing the most recent modification. The undo function works in conjunction with the track_changes decorator, which saves the current DataFrame state before any changes are made.
Parameters: No parameters are passed to the undo function; it operates on a globally tracked DataFrame (_df) and its history.
Returns:
pd.DataFrame, List, dict, str: The data in its previous state, before the most recent modification.
Note: If no changes have been made, undo will print a message saying there are no recent changes to undo.
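A sketch of the workflow, assuming modifications are tracked as described above:
import pandas as pd
from pyspan import handle_nulls, undo
df = pd.DataFrame({'A': [1, None, 3]})
df = handle_nulls(df, columns='A', action='remove')  # drops the row with the NaN
df = undo()  # restores the state from before handle_nulls ran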
15. sample_data
Description
Loads sample_data, a dataset containing various customer sales records. The dataset includes columns with missing values, inconsistent formats, and outliers, providing an opportunity for data cleaning and manipulation.
Parameters: This function does not take any parameters.
Returns:
- pd.DataFrame: A DataFrame containing the Customer Sales Data with the following columns:
  - CustomerID: Unique identifier for each customer.
  - Age: Age of the customer (may include missing values and inconsistent formats).
  - Gender: Gender of the customer (may include missing values).
  - PurchaseHistory: List of previous purchases made by the customer.
  - ProductCategory: Category of the product purchased (e.g., "Electronics", "Apparel").
  - PurchaseDate: Date of the purchase (may include inconsistent formats).
  - AmountSpent: Total amount spent on the purchase, including outliers.
  - PaymentMethod: Method of payment used (e.g., "Credit Card", "Cash"; includes mixed data types).
  - Country: Country of the customer.
  - MembershipStatus: Membership status of the customer (e.g., "Regular", "Gold", "Platinum"; may include missing values).
  - PhoneNumber: Phone number of the customer, in various formats.
  - DiscountCode: Discount code used in the transaction, with duplicates included.
Note: This dataset contains a variety of messy data, such as inconsistent date formats, mixed data types, missing values, and outliers, making it suitable for testing data cleaning and preprocessing techniques.
16. convert_unit
Description
The convert_unit function detects and converts units within specified columns of a Pandas DataFrame. It supports conversions across multiple categories such as length, mass, volume, and temperature, and handles values with inconsistent formats or missing entries. This makes it useful for data cleaning and preprocessing, ensuring consistency across datasets with mixed unit formats.
Parameters:
- df (pd.DataFrame, List, dict, tuple, np.ndarray, str): The DataFrame containing the data for unit conversion.
- columns (Optional[List[str]]): The names of the columns where unit conversion should be applied.
- unit_category (str): The category of units for conversion (e.g., 'length', 'mass', 'temperature').
- from_unit (str): The unit of the values to be converted (e.g., 'cm', 'kg').
- to_unit (str): The target unit for the conversion (e.g., 'm', 'g').
Returns:
pd.DataFrame, List, dict, str: Data with the values in the specified columns converted to the target unit. Non-numeric values in the original columns are retained.
Supported Unit Categories and Examples:
- length: 'mm', 'cm', 'm', 'km', 'in', 'ft', 'yd', 'mi'
- mass: 'mg', 'g', 'kg', 'ton', 'lb', 'oz'
- time: 's', 'min', 'h', 'day', 'week', 'month', 'year'
- volume: 'ml', 'l', 'm3', 'ft3', 'gal'
- temperature: 'C', 'F', 'K'
- speed: 'm/s', 'km/h', 'mph'
- energy: 'J', 'kJ', 'cal'
- area: 'm2', 'km2', 'ft2'
- pressure: 'Pa', 'bar', 'atm', 'psi'
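A minimal sketch of a length conversion, assuming the documented signature; note how a non-numeric entry is retained:
import pandas as pd
from pyspan import convert_unit
df = pd.DataFrame({'Distance': [150, 2500, 'n/a']})
# Convert centimeters to meters; the 'n/a' entry should be passed through unchanged
df_m = convert_unit(df, columns=['Distance'], unit_category='length', from_unit='cm', to_unit='m')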
17. resize_dataset
Description
The resize_dataset function allows you to intelligently resize a dataset, either expanding it with synthetic data or reducing it through smart sampling techniques. It is particularly useful when you need to balance class distributions, create larger datasets for training, or reduce dataset size while preserving important characteristics.
Parameters:
- df (pd.DataFrame): The input DataFrame to resize. Note: unlike other functions in pyspan, resize_dataset only accepts pandas DataFrames.
- target_size (int): Desired size of the output DataFrame.
- method (str, default: 'auto'): Method to use for resizing:
  - 'auto': Automatically choose based on target_size.
  - 'expand': Generate synthetic data to increase dataset size.
  - 'reduce': Remove data to decrease dataset size.
- balance_classes (bool, default: False): If True, try to balance classes in the target_column (for classification tasks).
- target_column (Optional[str], default: None): Column name containing the target variable (for classification tasks).
- preserve_correlation (bool, default: True): If True, try to preserve correlations when generating synthetic data.
- remove_outliers (bool, default: False): If True, remove outliers when reducing the dataset.
- outlier_threshold (float, default: 2.0): Z-score threshold for outlier removal.
- synthetic_method (str, default: 'statistical'): Method for generating synthetic data:
  - 'statistical': Use statistical properties of the original data.
  - 'neighbor': Use nearest-neighbor interpolation.
- random_state (int, default: 42): Random seed for reproducibility.
Returns:
pd.DataFrame: Resized DataFrame with either expanded or reduced data.
18. reduce_vif
Description
The reduce_vif function removes multicollinearity from a dataset using Variance Inflation Factor (VIF). It iteratively identifies and removes features with high VIF values until all remaining features have VIF values below the specified threshold. This is particularly useful for feature selection and improving model stability in regression analysis.
Parameters:
- data (pd.DataFrame, List, dict, tuple, np.ndarray, str): The input data in any supported format. The data should contain numeric features for VIF calculation.
- threshold (float, default: 5.0): The VIF threshold above which features are considered for removal. A common rule of thumb:
  - VIF = 1: Variables are uncorrelated.
  - VIF < 5: Moderate correlation.
  - VIF > 5: High correlation (consider removal).
- handle_na (str, default: 'drop'): How to handle NaN or infinite values. Options include:
  - 'drop': Drops rows containing NaN or infinite values.
  - 'fill': Fills NaN or infinite values with 0.
- return_format (str, default: 'dataframe'): The format of the output data. Options include:
  - 'dataframe': Returns a pandas DataFrame.
  - 'dict': Returns a dictionary with column names as keys.
  - 'list': Returns a list of lists.
  - 'json': Returns a JSON string.
Returns:
Union[pd.DataFrame, dict, list, str]: The cleaned dataset with reduced multicollinearity, in the specified format.
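A minimal sketch with deliberately collinear columns, assuming the documented signature:
import pandas as pd
from pyspan import reduce_vif
df = pd.DataFrame({'x1': [1, 2, 3, 4, 5],
                   'x2': [2, 4, 6, 8, 10],  # perfectly collinear with x1
                   'x3': [5, 3, 8, 1, 7]})
# x1 and x2 carry the same information, so one of them should be dropped
df_reduced = reduce_vif(data=df, threshold=5.0, handle_na='drop', return_format='dataframe')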
19. compress
Description
The compress function optimizes memory usage of a DataFrame by automatically downcasting numeric columns to the smallest possible data type and converting object columns to categorical when appropriate. This function is particularly useful when working with large datasets that consume excessive memory.
Parameters:
- df (pd.DataFrame): The DataFrame to compress. The function analyzes each column and:
  - For integers: Downcasts to the smallest possible integer type (uint8, int8, etc.).
  - For floats: Downcasts to float32 or smaller if possible.
  - For object/string columns: Converts to categorical if beneficial.
Returns:
pd.DataFrame: The compressed DataFrame with optimized memory usage. The function also prints:
- Initial memory usage
- Final memory usage
- Percentage of memory saved
Note: The compression is lossless for integers (exact values are preserved) but may introduce small precision changes for floating-point numbers. The function automatically determines the best data type for each column based on its content and range of values.
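A minimal sketch, assuming the behavior documented above; the toy columns are chosen so that both optimizations apply:
import numpy as np
import pandas as pd
from pyspan import compress
df = pd.DataFrame({'small_ints': np.arange(1000, dtype='int64'),  # fits comfortably in a smaller integer type
                   'labels': ['low', 'high'] * 500})              # few unique strings, a categorical candidate
# Prints memory usage before and after, and returns the downcast DataFrame
df_small = compress(df)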
Example Usage
Here are some examples to illustrate the usage of the functions provided in pyspan:
import pandas as pd
from pyspan import handle_nulls, remove, refine, manual_rename_columns
from pyspan import format_dt, split_column, detect_errors, convert_type, detect_outliers
from pyspan import display_logs, remove_chars, reformat, scale_data, undo
from pyspan import sample_data, convert_unit, resize_dataset, reduce_vif, compress
# Load a dataset
df = pd.read_csv('/content/Your_Dataset.csv')
# Example usage of handle_nulls
# 1. Remove rows with null values in the column
df_cleaned_remove = handle_nulls(df, columns='column', action='remove', inplace=False)
# 2. Replace null values in the column with 'Unknown'
df_cleaned_replace = handle_nulls(df, columns='column', action='replace', with_val='Unknown', inplace=False)
# 3. Impute null values in the column using the mode
df_cleaned_impute_mode = handle_nulls(df, columns='column', action='impute', by='mode', inplace=False)
# 4. Drop rows exceeding 50% missing values
df_cleaned = handle_nulls(df, threshold=50, axis='rows')
# 5. Drop columns exceeding 30% missing values
df_cleaned = handle_nulls(df, threshold=30, axis='columns')
# Example usage of remove
# Remove duplicate rows based on the 'Type' column
df_no_duplicates = remove(df, operation='duplicates', columns='Type', keep='first', inplace=False)
# Remove specified columns
df_no_columns = remove(df, operation='columns', columns=['Type', 'Date'], inplace=False)
# Example usage of refine
# Refine DataFrame columns
df_cleaned = refine(df)
# Refine DataFrame with column and row cleaning
df_cleaned = refine(df, clean_rows=True)
# Example usage of manual_rename_columns
rename_dict = {'OldName': 'NewName'}
df_renamed_dict = manual_rename_columns(df, rename_dict)
# Example usage of format_dt
# Apply the function to add features and format the 'timestamp' column
processed_df = format_dt(df, columns='timestamp', day=True, month=True, year=True, quarter=True,
hour=True, minute=True, day_of_week=True, date_format="%d-%m-%Y",
time_format="%I:%M %p", from_timezone='UTC', to_timezone='America/New_York')
# Example usage of split_column
df_split = split_column(df, column='ColumnName', delimiter=',')
# Example usage of detect_errors
errors = detect_errors(df, date_columns=['DateColumn'], numeric_columns=['NumericColumn'], text_columns=['TextColumn'])
# Example usage of convert_type
df_converted = convert_type(df)
# Example usage of detect_outliers
# Detect outliers using IQR for multiple columns
cleaned_data_iqr = detect_outliers(df, columns=['column', 'column'], method='iqr', threshold=1.5)
# Detect anomalies using Isolation Forest for multiple columns
cleaned_data_isolation_forest = detect_outliers(df, columns=['column', 'column'], anomaly_method='isolation_forest', contamination=0.05)
# Detect anomalies using Local Outlier Factor (LOF)
cleaned_data_lof = detect_outliers(df, columns='column', anomaly_method='lof', contamination=0.05, n_neighbors=20)
# Detect anomalies using DBSCAN
cleaned_data_dbscan = detect_outliers(df, columns='column', anomaly_method='dbscan', eps=0.5, min_samples=5)
# Example usage of display_logs
display_logs()
# Example usage of remove_chars
# Apply the remove_chars function to the 'Column' (keeping one space between words)
df_cleaned_spaces = remove_chars(df, columns=['Column'], strip_all=False)
# Apply the remove_chars function with strip_all=True to the 'Column' (removing all extra spaces)
df_cleaned_strip_all = remove_chars(df, columns=['Column'], strip_all=True)
# Apply the remove_chars function with custom characters (e.g., spaces) to the 'Column' column
df_cleaned_custom_chars = remove_chars(df, columns=['Column'], custom_characters=' ')
# Example usage of reformat
# Apply formatting from 'reference_column' to 'target_column'
df_reformatted = reformat(df, target_column='target_column', reference_column='reference_column')
# Example usage of scale_data
# Apply Min-Max Scaling to columns 'A' and 'B'
df_scaled = scale_data(df, method='minmax', columns=['A', 'B'])
# Example usage of undo
# Undo the most recent change
df = undo()
# Example usage of Dataset 'sample_data'
# Load the Sample Data dataset
df = sample_data()
# Example usage of convert_unit
# Convert 'Distance' column from meters to miles
converted_df = convert_unit(df, columns=['Distance'], unit_category='length', from_unit='m', to_unit='mi')
# Convert 'Temperature' column from Fahrenheit to Kelvin
converted_df_temp = convert_unit(df, columns=['Temperature'], unit_category='temperature', from_unit='F', to_unit='K')
# Example usage of resize_dataset
# Expand a small dataset to 1000 rows using statistical methods
expanded_df = resize_dataset(df, target_size=1000, method='expand')
# Reduce a large dataset to 500 rows while preserving diversity
reduced_df = resize_dataset(df, target_size=500, method='reduce')
# Balance classes in a classification dataset
balanced_df = resize_dataset(df, target_size=800, balance_classes=True, target_column='target')
# Generate synthetic data while preserving correlations between variables
synthetic_df = resize_dataset(df, target_size=1200, method='expand', preserve_correlation=True)
# Reduce dataset size while removing outliers
clean_df = resize_dataset(df, target_size=600, method='reduce', remove_outliers=True, outlier_threshold=2.5)
# Example usage of reduce_vif (data_dict holds your data as a dictionary)
reduced_data = reduce_vif(data=data_dict, threshold=3.0, handle_na='fill', return_format='json')
# Example usage of compress
compressed_df = compress(df)
License
This package is licensed under the MIT License. See the LICENSE file for more details.
Contact
For issues or questions, please contact amynahreimoo@gmail.com.