Utility functions for BAAR developers
Project description
Baarutil
This custom library is created specifically for developers and users of BAAR, a product of Allied Media Inc.
Authors:
Souvik Roy sroy-2019
Zhaoyu (Thomas) Xu xuzhaoyu
Additional Info:
The following string structure is the streamlined format that developers/users use throughout an automation workflow designed in BAAR:
"Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
Available functions and the examples are listed below:
1. read_convert(string), Output Data Type: list of dictionaries
Attributes:
i. string: Input String, Data Type = String
Input: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
Output: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
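Based on the documented input/output, the parsing can be sketched in plain Python. This is an illustrative equivalent, not the library's implementation; `read_convert_sketch` and the separator constants are assumed names inferred from the string structure above:

```python
# Separators inferred from the documented BAAR string structure.
ROW_SEP, COL_SEP, KV_SEP = "__::__", "__$$__", "__=__"

def read_convert_sketch(string):
    """Parse a BAAR string into a list of dictionaries (one per row)."""
    rows = []
    for row in string.split(ROW_SEP):
        pairs = (field.split(KV_SEP, 1) for field in row.split(COL_SEP))
        rows.append({key: value for key, value in pairs})
    return rows
```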
2. write_convert(input_list), Output Data Type: string
Attributes:
i. input_list: List that contains the Dictionaries of Data, Data Type = List
Input: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
Output: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
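The inverse operation can be sketched the same way; `write_convert_sketch` is an assumed name, and the separators are inferred from the documented format:

```python
def write_convert_sketch(input_list):
    """Serialize a list of dictionaries back into a BAAR string."""
    return "__::__".join(
        "__$$__".join(f"{key}__=__{value}" for key, value in row.items())
        for row in input_list
    )
```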
3. string_to_df(string, rename_cols, drop_dupes), Output Data Type: pandas DataFrame
Attributes:
i. string: Input String, Data Type = String
ii. rename_cols: Dictionary that contains old column names and new column names mapping, Data Type = Dictionary, Default Value = {}
iii. drop_dupes: Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False
Input: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
Output:
Column_1 | Column_2
---|---
abc | def
hello | world
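A rough equivalent using pandas, assuming the separators inferred from the documented string structure (`string_to_df_sketch` is an illustrative name, not the library's code):

```python
import pandas as pd

def string_to_df_sketch(string, rename_cols=None, drop_dupes=False):
    """Parse a BAAR string into a pandas DataFrame."""
    records = [
        dict(field.split("__=__", 1) for field in row.split("__$$__"))
        for row in string.split("__::__")
    ]
    df = pd.DataFrame(records)
    if rename_cols:
        df = df.rename(columns=rename_cols)
    if drop_dupes:
        df = df.drop_duplicates().reset_index(drop=True)
    return df
```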
4. df_to_string(input_df, rename_cols, drop_dupes), Output Data Type: string
Attributes:
i. input_df: Input DataFrame, Data Type = pandas DataFrame
ii. rename_cols: Dictionary that contains old column names and new column names mapping, Data Type = Dictionary, Default Value = {}
iii. drop_dupes: Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False
Input:
Column_1 | Column_2
---|---
abc | def
hello | world
Output: "Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world"
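The reverse direction can be sketched with pandas as well; `df_to_string_sketch` is an assumed name, and values are cast to `str` before serialization, which may not match the library's exact behavior:

```python
import pandas as pd

def df_to_string_sketch(input_df, rename_cols=None, drop_dupes=False):
    """Serialize a pandas DataFrame into a BAAR string."""
    df = input_df.rename(columns=rename_cols or {})
    if drop_dupes:
        df = df.drop_duplicates()
    return "__::__".join(
        "__$$__".join(f"{col}__=__{val}" for col, val in row.items())
        for row in df.astype(str).to_dict(orient="records")
    )
```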
5. df_to_listdict(input_df, rename_cols, drop_dupes), Output Data Type: list
Attributes:
i. input_df: Input DataFrame, Data Type = pandas DataFrame
ii. rename_cols: Dictionary that contains old column names and new column names mapping, Data Type = Dictionary, Default Value = {}
iii. drop_dupes: Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False
Input:
Column_1 | Column_2
---|---
abc | def
hello | world
Output: [{"Column_1":"abc", "Column_2":"def"}, {"Column_1":"hello", "Column_2":"world"}]
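This conversion maps directly onto pandas' record orientation; a minimal sketch under that assumption (`df_to_listdict_sketch` is an illustrative name):

```python
import pandas as pd

def df_to_listdict_sketch(input_df, rename_cols=None, drop_dupes=False):
    """Convert a pandas DataFrame into a list of dictionaries (one per row)."""
    df = input_df.rename(columns=rename_cols or {})
    if drop_dupes:
        df = df.drop_duplicates()
    return df.to_dict(orient="records")
```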
6. decrypt_vault(encrypted_message, config_file), Output Data Type: string
Attributes:
i. encrypted_message: Encrypted Baar Vault Data, Data Type = string
ii. config_file: Keys that need to be provided by Allied Media.
This function can also be called from a Robot Framework script by importing the baarutil library and using the Decrypt Vault keyword. When invoked, this function sets the Log Level of the Robot Framework script to NONE for security reasons; developers must use Set Log Level INFO in the robot script to restart logging.
Input: <<Encrypted Text>>
Output: <<Decrypted Text>>
7. generate_password(password_size, upper, lower, digits, symbols, exclude_chars), Output Data Type: string
Attributes:
i. password_size: Password length, Data Type = int, Default Value = 10 (should be greater than 4)
ii. upper: Are Uppercase characters required?, Data Type = Bool (True/False), Default Value = True
iii. lower: Are Lowercase characters required?, Data Type = Bool (True/False), Default Value = True
iv. digits: Are Digits characters required?, Data Type = Bool (True/False), Default Value = True
v. symbols: Are Symbols/ Special characters required?, Data Type = Bool (True/False), Default Value = True
vi. exclude_chars: List of characters to be excluded from the final password, Data Type = List, Default Value = []
This function can also be called from a Robot Framework script by importing the baarutil library and using the Generate Password keyword. When invoked, this function sets the Log Level of the Robot Framework script to NONE for security reasons; developers must use Set Log Level INFO in the robot script to restart logging.
Input (Optional): <<Password Length>>, <<Uppercase Required?>>, <<Lowercase Required?>>, <<Digits Required?>>, <<Symbols Required?>>
Output: <<Password String>>
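A minimal sketch of a generator with the documented options, assuming each enabled character class should appear at least once (`generate_password_sketch` is an illustrative name, not the library's code):

```python
import random
import string as string_lib

def generate_password_sketch(password_size=10, upper=True, lower=True,
                             digits=True, symbols=True, exclude_chars=None):
    """Generate a random password from the requested character classes."""
    exclude = set(exclude_chars or [])
    pools = []
    if upper:
        pools.append([c for c in string_lib.ascii_uppercase if c not in exclude])
    if lower:
        pools.append([c for c in string_lib.ascii_lowercase if c not in exclude])
    if digits:
        pools.append([c for c in string_lib.digits if c not in exclude])
    if symbols:
        pools.append([c for c in string_lib.punctuation if c not in exclude])
    rng = random.SystemRandom()  # OS entropy source, suitable for secrets
    # Guarantee at least one character from each enabled class, then fill up.
    password = [rng.choice(pool) for pool in pools]
    all_chars = [c for pool in pools for c in pool]
    password += [rng.choice(all_chars) for _ in range(password_size - len(password))]
    rng.shuffle(password)
    return "".join(password)
```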
8. generate_report(data_df, file_name, path, file_type, detailed_report, replace_old_file, final_file_name_case, time_stamp, encoding, index, engine, max_new_files_count, sheet_name), Output Data Type: Bool or Dictionary (based on the value of detailed_report)
Attributes:
i. data_df: Input Dataframe, Data Type = pandas.DataFrame()
ii. file_name: Final file name, Data Type = str
iii. path: Final file path, Data Type = str, Default Value = Current working directory
iv. file_type: Final file extension/file type, Data Type = str, Default Value = 'csv', Available Options = 'csv' or 'xlsx'
v. detailed_report: Is detailed status of the run required?, Data Type = Bool (True/False), Default Value = False
vi. replace_old_file: Should the program replace the old file after each run, or keep creating new files? (Only applies when the final file name is the same each time.), Data Type = Bool (True/False)
vii. final_file_name_case: Letter case of the final file name, Data Type = str, Default Value = 'unchanged', Available Options = 'upper', 'lower', or 'unchanged'
viii. time_stamp: Time stamp at the end of the filename to make each file unique, Data Type = Bool (True/False), Default Value = False
ix. encoding: Encoding of the file, Data Type = str, Default Value = 'utf-8'
x. index: Dataframe index in the final file, Data Type = Bool (True/False), Default Value = False
xi. engine: Engine for the pandas to_excel ExcelWriter, Data Type = str, Default Value = 'openpyxl'
xii. max_new_files_count: Count of maximum new files if the replace_old_file is False, Data Type = int, Default Value = 100
xiii. sheet_name: Sheet name in the final excel, Data Type = str, Default Value = 'Sheet1'
This function can also be called from a Robot Framework script by importing the baarutil library and using the Generate Report keyword.
Input: Mandatory arguments -> data_df, file_name
Output (if detailed_report==False): True/ False
Output (if detailed_report==True): {'file_path': <<Absolute path of the newly generated file>>, 'file_generation_status': True/ False, 'message': <<Detailed message>>, 'start_time': <<Start time when the function was initiated>>, 'end_time': <<End time when the function was completed>>}
9. string_to_report(data, file_name, path, file_type, detailed_report, replace_old_file, final_file_name_case, time_stamp, encoding, index, engine, max_new_files_count, sheet_name, rename_cols, drop_dupes), Output Data Type: Bool or Dictionary (based on the value of detailed_report)
Attributes:
i. data: Input BAAR string, Data Type = str
ii. file_name: Final file name, Data Type = str
iii. path: Final file path, Data Type = str, Default Value = Current working directory
iv. file_type: Final file extension/file type, Data Type = str, Default Value = 'csv', Available Options = 'csv' or 'xlsx'
v. detailed_report: Is detailed status of the run required?, Data Type = Bool (True/False), Default Value = False
vi. replace_old_file: Should the program replace the old file after each run, or keep creating new files? (Only applies when the final file name is the same each time.), Data Type = Bool (True/False)
vii. final_file_name_case: Letter case of the final file name, Data Type = str, Default Value = 'unchanged', Available Options = 'upper', 'lower', or 'unchanged'
viii. time_stamp: Time stamp at the end of the filename to make each file unique, Data Type = Bool (True/False), Default Value = False
ix. encoding: Encoding of the file, Data Type = str, Default Value = 'utf-8'
x. index: Dataframe index in the final file, Data Type = Bool (True/False), Default Value = False
xi. engine: Engine for the pandas to_excel ExcelWriter, Data Type = str, Default Value = 'openpyxl'
xii. max_new_files_count: Count of maximum new files if the replace_old_file is False, Data Type = int, Default Value = 100
xiii. sheet_name: Sheet name in the final excel, Data Type = str, Default Value = 'Sheet1'
xiv. rename_cols: Dictionary that contains old column names and new column names mapping, Data Type = Dictionary, Default Value = {}
xv. drop_dupes: Drop duplicate rows from the final dataframe, Data Type = Bool, Default Value = False
This function can also be called from a Robot Framework script by importing the baarutil library and using the String To Report keyword.
Input: Mandatory arguments -> data (BAAR string: Column_1__=__abc__$$__Column_2__=__def__::__Column_1__=__hello__$$__Column_2__=__world), file_name
Output (if detailed_report==False): True/ False
Output (if detailed_report==True): {'file_path': <<Absolute path of the newly generated file>>, 'file_generation_status': True/ False, 'message': <<Detailed message>>, 'start_time': <<Start time when the function was initiated>>, 'end_time': <<End time when the function was completed>>}
10. clean_directory(path, remove_directory), Output Data Type: Bool
Attributes:
i. path: Absolute paths of the target directories separated by "|", Data Type = str
ii. remove_directory: Should the nested directories be deleted?, Data Type = Bool (True/False), Default Value = False
This function can also be called from a Robot Framework Script by importing the baarutil library and using Clean Directory keyword.
Input: "C:/Path1|C:/Path2|C:/Path3"
Output: True/False
File details
Details for the file baarutil-1.5.0.tar.gz
File metadata
- Download URL: baarutil-1.5.0.tar.gz
- Upload date:
- Size: 10.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.5.0 pkginfo/1.7.0 requests/2.22.0 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | d4809fe7d71798d6228cc66da2dde2a58dffe3b7492876765c8d0280069dc733
MD5 | 5a825baac4bedd64444e04d53052cb06
BLAKE2b-256 | a912a9c15db45d219c212a1bcab498c9070367d8e41eea5c7b332108c3de9d76