Data cleansing tools for Internal Auditors

Project description

Introduction to Pydit

Pydit is a library of data-wrangling tools aimed at internal auditors, built around our typical use cases (see the explanation below).

This library is also a learning exercise for me in how to create a package, build documentation and tests, and publish it. Code quality varies, and given its main use case I don't commit to keeping backward compatibility (see below), so use it at your own peril! If, despite all that, you wish to contribute, feel free to get in touch.

Shout out: Pydit takes ideas (and some code) from Pyjanitor, an awesome library. Check it out!

Why a dedicated library for auditors?

The problem Pydit tries to solve is that these cleanup and check snippets (e.g. extracting duplicates) are important for our work and start to crop up everywhere, often pasted from the internet or from a recent version used in another script, with no consistency or tests.

On the other hand, libraries like Pyjanitor do a great job but a) require an installation that often is not allowed in your environment, b) tend to be compact and non-verbose (and use method chaining), and c) are difficult to verify given the overall complexity of the library.

For internal audit tests, what we really need is verbose, easy-to-understand code and outputs, so the work is almost self-explanatory and easy to review. Most of the time, performance is secondary; we just need the code to run a few times for the duration of the audit.

This leads to Pydit following these principles:

  1. Functions should be self-standing with minimal imports/dependencies.

The auditor should be able to import any individual module and use only those functions in the audit test. That makes the code easier to understand, document, and peer-review, and it reduces dependencies on future versions of Pydit. Typically, we need to file the code exactly as it was run during the audit.

  2. Functions include verbose logging to explain what is going on, another feature specifically useful for the internal audit use case.

  3. Focus on documentation, tests, and simple code, with less concern for performance.

  4. No method chaining, in the interest of source code readability.

Pyjanitor is great and its chaining approach is elegant and compact; it is definitely one to have in the toolbox. However, for documenting an audit test I have found it better to check and show all the intermediate steps and results.

  5. The default behaviour is to return a new or transformed copy of the object rather than mutate the input object(s). An "inplace=True" option should be available where feasible (see the sketch after this list).
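
To illustrate the copy-by-default pattern, here is a minimal sketch of how such a function can be written. This is not Pydit's actual implementation; the function name and transformation are made up for illustration.

import pandas as pd


def cleanup_example(df, inplace=False):
    # Work on a copy unless the caller explicitly opts into mutation
    if not inplace:
        df = df.copy()
    # Illustrative transformation: normalise column names
    df.columns = [str(c).strip().lower().replace(" ", "_") for c in df.columns]
    return df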

Quick start

import pandas as pd
from pydit import start_logging_info # sets up nice logging params with rotation
from pydit import profile_dataframe  # runs a few descriptive analyses on a df
from pydit import cleanup_column_names # opinionated cleanup of column names
from pydit import check_duplicates # duplicates check, used further below


logger = start_logging_info()
logger.info("Started")

The logger feature is used extensively by default, aiming to generate a human-readable audit log that can be included in workpapers.

I recommend importing individual functions; that way you can copy them locally into your project folder and just change the import statement to point to the local module, freezing the version and reducing dependencies.
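
For example, a vendored setup could look like this (the local folder and module name below are hypothetical, not part of Pydit):

# Copy the function's source into your project, e.g. ./local_pydit/check_duplicates.py,
# then point the import at the local copy instead of the installed package:
# from pydit import check_duplicates                        # before: installed package
from local_pydit.check_duplicates import check_duplicates   # after: frozen local copy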

df = pd.read_excel("mydata.xlsx")

df_profile = profile_dataframe(df)  # returns a df with summary statistics

# You may realise the columns from Excel are all over the place with cases and
# special chars

cleanup_column_names(df, inplace=True)  # note inplace=True; otherwise it returns a new copy

df_deduped = check_duplicates(
    df,
    columns=["customer_id", "last_update_date"],
    ascending=[True, False],
    keep="first",
    indicator=True,
    also_return_non_duplicates=True,
)

# You get a report on the duplicates found; the last modification entry is
# retained (via the pre-sort, descending by date) and the non-duplicates are
# returned, with a boolean column flagging the rows that had duplicates removed.
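
For reviewers who want to see the underlying logic, the dedupe step above corresponds roughly to the following plain pandas operations (a conceptual sketch only, not Pydit's implementation; the "had_duplicates" flag name is made up and the exact semantics may differ):

cols = ["customer_id", "last_update_date"]
df_sorted = df.sort_values(cols, ascending=[True, False])
# True for every row that belongs to a duplicated group
dup_any = df_sorted.duplicated(subset=cols, keep=False)
# Keep the first occurrence of each group, drop the rest
keep_mask = ~df_sorted.duplicated(subset=cols, keep="first")
df_deduped = df_sorted[keep_mask].copy()
df_deduped["had_duplicates"] = dup_any[keep_mask]  # flag rows that had duplicates removed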


Requires

  • python >= 3.11 (should by and large work on 3.10, but tests pass on 3.11)
  • pandas
  • numpy
  • openpyxl
  • matplotlib (for the occasional plot, e.g. Benford)

Installation

pip install pydit-jceresearch

(not available on Anaconda yet)

Documentation

Documentation can be found on Read the Docs.

Dev Install

git clone https://github.com/jceresearch/pydit.git
cd pydit
pip install -e .

This project uses:

  • pylint for linting
  • black for style
  • pytest for testing
  • sphinx for documentation on Read the Docs (RTD)
  • myst_parser, also required for the RTD build
  • poetry for packaging
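
A typical local development loop with these tools might look like this (standard invocations assumed; the project does not prescribe exact commands):

black .        # format the code base
pylint pydit   # lint the package
pytest         # run the test suite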

Download files

Download the file for your platform.

Source Distribution

pydit_jceresearch-0.1.4.tar.gz (46.8 kB, source)

Built Distribution

pydit_jceresearch-0.1.4-py3-none-any.whl (57.2 kB, Python 3)

File details

Details for the file pydit_jceresearch-0.1.4.tar.gz.

File metadata

  • Download URL: pydit_jceresearch-0.1.4.tar.gz
  • Upload date:
  • Size: 46.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.0 CPython/3.11.0 Linux/6.2.0-1015-azure

File hashes

Hashes for pydit_jceresearch-0.1.4.tar.gz:

  • SHA256: b9e9e48c20dfe0a19cf70ce54ee277f201230c797a298e583803a555fa3d8e2e
  • MD5: f45b4d88124b25d6ed966f4ffcdee293
  • BLAKE2b-256: 194dad7a89728a1ea2276f2925f16dcf5f4aca7431bfa72c98aa4f61776bbf81


File details

Details for the file pydit_jceresearch-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: pydit_jceresearch-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 57.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.0 CPython/3.11.0 Linux/6.2.0-1015-azure

File hashes

Hashes for pydit_jceresearch-0.1.4-py3-none-any.whl:

  • SHA256: 3c09abd96bb4fdac79da5dc6b3ee6cd0e3b9ba4ab2fa0d6c45308970a461953d
  • MD5: ec0aa7ebebcdf6cc963b16ee5854a24c
  • BLAKE2b-256: 181f098146b5d236226a7c4c8f1d220a268e63e58075fd9a4dbd24e78ef9ef14

