Data cleansing tools for Internal Auditors
Project description
Introduction to Pydit
Pydit is a library of data wrangling tools aimed at internal auditors and our specific use cases.
This library is also a learning exercise for me on how to create a package, build documentation & tests, and publish it.
Code quality varies, and I don't commit to keeping backward compatibility (see below for how I use it), so use it at your own peril!
If, despite all that, you wish to contribute, feel free to get in touch.
Shout out: Pydit takes ideas (and some code) from Pyjanitor, an awesome library.
Check it out!
Why a dedicated library for auditors?
The problem Pydit solves is that a big part of our audit tests has to do with basic data quality checks (e.g. finding duplicates or blanks), as these may flag potential fraud or systemic errors.
But to do those checks I often end up pasting snippets from the internet or reusing code from previous audits, with no consistency and no tests.
Libraries like pyjanitor do a great job, however:
a) they require installation, which often is not allowed in your environment,
b) they tend to be very compact and terse (e.g. they use method chaining), and
c) they are difficult to review/verify.
What I really need is:
a) code that is easy to review, both the source and its execution (even for non-programmers),
b) portability: minimal dependencies, pure Python, ideally a drop-in module, and
c) performance only as a secondary concern, after readability and repeatability.
Pydit follows these principles:
1. Functions should be self-standing, with minimal imports/dependencies.
The auditor should be able to import, or copy-paste, just a specific module into the project to perform a particular audit test. That makes it easier to understand, customise and review, and it removes dependencies on future versions of pydit. Note that, in any case, we need to file the actual code exactly as it was used during the audit.
2. Functions should include verbose logging, short of debug level.
3. Focus on documentation, tests and simple code, with less concern for performance.
4. No method chaining, in the interest of source code readability.
While Pyjanitor is great and its method chaining approach is elegant, I've found that the good old step-by-step approach works better for documenting the test and for explaining it to reviewers or newbies.
5. By default, return a new or transformed copy of the object; do not mutate the input object(s). An "inplace=True" option should be available where feasible (see the sketch right after this list).
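As an illustration of principle 5, here is a minimal sketch of the copy-by-default / inplace pattern; the function below is hypothetical and not part of pydit's API:

import pandas as pd

def cleanup_example(df: pd.DataFrame, inplace: bool = False):
    """Hypothetical helper showing the copy-by-default pattern."""
    if not inplace:
        df = df.copy()  # work on a copy so the caller's dataframe is untouched
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    return None if inplace else df  # mirror pandas' convention of returning None when inplace=True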
Quick start
import pandas as pd
from pydit import start_logging_info  # sets up nice logging params with rotation
from pydit import profile_dataframe  # runs a few descriptive analyses on a df
from pydit import cleanup_column_names  # opinionated cleanup of column names
from pydit import check_duplicates  # duplicates report, used further below
logger = start_logging_info()
logger.info("Started")
The logger feature is used extensively by default, aiming to generate a human-readable audit log to be included in the workpapers.
I recommend importing individual functions, so you can copy them locally to your project folder and just change the import to point to the local module; that way you freeze the version and reduce dependencies.
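For example (the local module name below is an assumption; match it to the file you actually copy from pydit's source):

# during development:
# from pydit import cleanup_column_names
# when filing the audit, keep a frozen copy of the module next to your script and point to it:
from cleanup_column_names import cleanup_column_names  # hypothetical local copy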
df = pd.read_excel("mydata.xlsx")
df_profile = profile_dataframe(df)  # will return a df with summary statistics
# you may realise the columns from excel are all over the place with cases and
# special chars
cleanup_column_names(df, inplace=True)  # uses inplace; otherwise it returns a new copy
df_deduped = check_duplicates(
    df,
    columns=["customer_id", "last_update_date"],
    ascending=[True, False],
    keep="first",
    indicator=True,
    also_return_non_duplicates=True,
)
# you will get a nice output with the report on duplicates, retaining the last
# modification entry (via the pre-sort descending by date) and returning
# the non-duplicates.
# It also adds a boolean column flagging the entries that had a duplicate removed.
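From there you could, for instance, keep the flagged entries as evidence for the workpapers. The flag column name "_duplicates" below is an assumption; check the actual check_duplicates output for the real name:

flagged = df_deduped[df_deduped["_duplicates"]]  # entries that had a duplicate removed (column name assumed)
flagged.to_excel("duplicates_flagged.xlsx", index=False)  # file with the audit evidence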
Requires
- python >= 3.11 (should work by and large on 3.10, but tests are run on 3.11)
- pandas
- numpy
- openpyxl
- matplotlib (for the occasional plot, e.g. Benford)
Installation
pip install pydit-jceresearch
(not available on Anaconda yet)
Documentation
Documentation can be found here
Dev Install
git clone https://github.com/jceresearch/pydit.git
pip install -e .
This project uses:
- pylint for linting
- black for style
- pytest for testing
- sphinx for documentation in RTD
- myst_parser, which is a requirement for RTD too
- poetry for packaging
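A typical local workflow with those tools might look like this (the package path and test layout are assumptions):

black .
pylint pydit
pytest
poetry build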
File details
Details for the file pydit_jceresearch-0.1.6.tar.gz.
File metadata
- Download URL: pydit_jceresearch-0.1.6.tar.gz
- Upload date:
- Size: 47.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.0 Linux/6.5.0-1016-azure
File hashes

Algorithm | Hash digest
---|---
SHA256 | 3927281bb73cba9f456bd6af3c28bdbf8ce9503d242d6477a903c45701877241
MD5 | 6e4813d916fc82105d944282747b80aa
BLAKE2b-256 | 3451c13060976ce75c730611a8b785c9253a4b18315fcf2332b6754bf9769d1b
File details
Details for the file pydit_jceresearch-0.1.6-py3-none-any.whl.
File metadata
- Download URL: pydit_jceresearch-0.1.6-py3-none-any.whl
- Upload date:
- Size: 57.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.0 Linux/6.5.0-1016-azure
File hashes

Algorithm | Hash digest
---|---
SHA256 | 45bbbb0d7e003d800b37a4f7cefffe18c80bc1dfa9a62af6be63638900eb21a3
MD5 | dfd2ff862d1723164d40c7b645e508ae
BLAKE2b-256 | 1c8c999d17e458d96148b17fd36afe956019bfeccfd1de7b8dd6b60aa57d8b88