
Data cleansing tools for Internal Auditors

Project description

Introduction to Pydit

Pydit is a library of data wrangling tools aimed at internal auditors, built specifically for our use cases.

This library is also a learning exercise for me on how to create a package, build documentation & tests, and publish it.
Code quality varies, and I don't commit to keeping backward compatibility (see below for how I use it), so use it at your own peril!
If, despite all that, you wish to contribute, feel free to get in touch.

Shout out: Pydit takes ideas (and some code) from Pyjanitor, an awesome library.
Check it out!

Why a dedicated library for auditors?

The problem Pydit solves is that a big part of our audit tests has to do with basic data quality checks (e.g. finding duplicates or blanks), as these may flag potential fraud or systemic errors.

But to do those checks, I often end up pasting snippets from the internet or reusing code from previous audits, with no consistency and no tests.

Libraries like pyjanitor do a great job, however:

a) they require installation, which often is not allowed in your environment,

b) they tend to be very compact and terse (e.g. they rely on method chaining), and

c) they are difficult to review/verify.

What I really need is:

a) code that is easy to review, both the source and its execution (even for non-programmers),

b) portability: minimal dependencies, pure Python, ideally a drop-in module, and

c) readability and repeatability, with performance ultimately secondary.

Pydit follows these principles:

  1. Functions should be self-standing, with minimal imports/dependencies.

The auditor should be able to import or copy-paste just a specific module into the project to perform a particular audit test. That makes it easier to understand, customise and review, and it removes dependencies on future versions of pydit. In any case, we need to file the actual code exactly as it was used during the audit.

  2. Functions should include verbose logging, stopping short of debug level.

  3. Focus on documentation, tests and simple code, with less concern for performance.

  4. No method chaining, in the interest of source code readability.

While Pyjanitor is great and its method chaining approach is elegant, I've found that the good old "step by step" approach works better for documenting the test and explaining it to reviewers or newbies.

  5. Functions return a new, transformed copy of the object; the code does not mutate the input object(s). Any previous inplace=True parameter is deprecated and will be removed in future versions. A sketch of this style follows below.
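
To make these principles concrete, here is a minimal sketch of what a function written in this style looks like: step by step, verbose logging, and returning a new copy rather than mutating the input. The helper and its column names are hypothetical examples, not actual pydit code.

# Hypothetical helper written in the pydit style (illustrative, not pydit source code)
import logging

import pandas as pd

logger = logging.getLogger(__name__)

def flag_blanks(df, column):
    """Return a copy of df with a boolean column flagging blank values in `column`."""
    new_df = df.copy()  # never mutate the caller's DataFrame
    blanks = new_df[column].isna() | (new_df[column].astype(str).str.strip() == "")
    new_df[column + "_is_blank"] = blanks
    # verbose, human-readable logging for the audit trail
    logger.info("Column '%s': %d blank values found out of %d rows", column, int(blanks.sum()), len(new_df))
    return new_df

# example usage
df = pd.DataFrame({"vendor_name": ["ACME", " ", None, "Globex"]})
df_checked = flag_blanks(df, "vendor_name")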

Quick start

import pandas as pd
from pydit import start_logging_info # sets up nice logging params with rotation
from pydit import profile_dataframe  # runs a few descriptive analyses on a df
from pydit import cleanup_column_names # opinionated cleanup of column names
from pydit import check_duplicates # duplicate check used later in this quick start


logger = start_logging_info()
logger.info("Started")

The logging feature is used extensively by default, aiming to generate a human-readable audit log that can be included in the workpapers.
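
For context, an INFO-level logger with console output and file rotation can be set up with the standard library roughly as below. This is only a sketch of the general idea, not pydit's actual start_logging_info implementation; the file name and rotation sizes are assumptions.

# Rough standard-library sketch of an audit-style logger (not pydit's actual code)
import logging
from logging.handlers import RotatingFileHandler

def start_logging_info_sketch(filename="audit.log"):
    logger = logging.getLogger("audit")
    logger.setLevel(logging.INFO)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    file_handler = RotatingFileHandler(filename, maxBytes=1_000_000, backupCount=5)
    file_handler.setFormatter(fmt)
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(fmt)
    logger.addHandler(file_handler)
    logger.addHandler(stream_handler)
    return logger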

I recommend importing individual functions, so you can copy them locally into your project folder and just change the import statement to point to the local module. That way you freeze the version and reduce dependencies.
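
For example, if you have saved a local copy of the profiling module as profile_dataframe.py next to your audit scripts (the filename is just an assumption for illustration), the change is a one-line swap:

# before: import from the installed package
# from pydit import profile_dataframe

# after: import from the local copy filed with the audit workpapers
# (module filename is an assumption for illustration)
from profile_dataframe import profile_dataframe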

df = pd.read_excel("mydata.xlsx")

df_profile = profile_dataframe(df)  # will return a df with summary statistics

# you may realise the columns from excel are all over the place with cases and
# special chars

df_clean = cleanup_column_names(df)

df_deduped = check_duplicates(
    df_clean,
    columns=["customer_id", "last_update_date"],
    ascending=[True, False],
    keep="first",
    indicator=True,
    also_return_non_duplicates=True,
)

# You will get a nice report on the duplicates, retaining the last
# modification entry (via the pre-sort, descending by date) and returning
# the non-duplicates.
# It also adds a boolean column flagging the rows that had duplicates removed.
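
For reviewers who want to see the underlying idea in plain pandas, the steps are roughly the following. This is only one plausible reading of the behaviour described above, not pydit's actual check_duplicates implementation, and the flag column name is an assumption.

# Rough plain-pandas equivalent of the duplicate check above (illustrative only)
df_sorted = df_clean.sort_values(
    ["customer_id", "last_update_date"], ascending=[True, False]
)
# keep the most recent entry per customer (first row after the descending date sort)
df_deduped = df_sorted.drop_duplicates(subset=["customer_id"], keep="first").copy()
# flag the rows whose key appeared more than once in the original data
dup_keys = df_sorted.loc[df_sorted.duplicated(subset=["customer_id"], keep=False), "customer_id"]
df_deduped["had_duplicates"] = df_deduped["customer_id"].isin(dup_keys)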

Requires

  • python >= 3.13 (it should by and large work from 3.10 onwards, but I test on 3.13)
  • pandas
  • numpy
  • openpyxl
  • matplotlib (for the occasional plot, e.g. a Benford's law check; see the sketch below)
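
As an aside, a first-digit Benford's law check is the kind of occasional plot meant here. The generic sketch below uses only pandas, numpy and matplotlib and is not a pydit function; the sample data is a placeholder.

# Generic first-digit Benford's law check (illustrative; not pydit's API)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

amounts = pd.Series([123.45, 1890.0, 230.10, 145.00, 1999.99])  # placeholder; use your data column
first_digits = amounts.abs().loc[lambda s: s >= 1].astype(int).astype(str).str[0].astype(int)
observed = first_digits.value_counts(normalize=True).reindex(range(1, 10), fill_value=0)
expected = np.log10(1 + 1 / np.arange(1, 10))  # Benford's law expected frequencies

ax = observed.plot(kind="bar", label="observed")
ax.plot(range(9), expected, color="red", marker="o", label="Benford expected")
ax.set_xlabel("first digit")
ax.set_ylabel("relative frequency")
ax.legend()
plt.show()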

Installation

pip install pydit-jceresearch

(not available in anaconda yet)

Documentation

Documentation can be found here

Dev Install

git clone https://github.com/jceresearch/pydit.git
cd pydit
pip install -e .

This project uses:

  • pylint for linting
  • black for style
  • pytest for testing
  • sphinx for documentation on Read the Docs (RTD)
  • myst_parser (also required for the RTD build)
  • poetry for packaging


Download files

Download the file for your platform.

Source Distribution

pydit_jceresearch-0.1.8.tar.gz (51.6 kB)

Uploaded Source

Built Distribution

pydit_jceresearch-0.1.8-py3-none-any.whl (62.9 kB)

Uploaded Python 3

File details

Details for the file pydit_jceresearch-0.1.8.tar.gz.

File metadata

  • Download URL: pydit_jceresearch-0.1.8.tar.gz
  • Upload date:
  • Size: 51.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.12.7 Linux/6.5.0-1025-azure

File hashes

Hashes for pydit_jceresearch-0.1.8.tar.gz
  • SHA256: 49bf17175215d9471917d03a4d80a71f8fd749cc3e9e65add499e4315956c052
  • MD5: ee46b1cfaa3a10c51156cd254c7828b8
  • BLAKE2b-256: 8c4dcbb0ecd5b7afdd179fb4f6050f0d41b4ccffbb79fd15583287e4810b4ff8


File details

Details for the file pydit_jceresearch-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: pydit_jceresearch-0.1.8-py3-none-any.whl
  • Upload date:
  • Size: 62.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.4 CPython/3.12.7 Linux/6.5.0-1025-azure

File hashes

Hashes for pydit_jceresearch-0.1.8-py3-none-any.whl
  • SHA256: 8e08a0522c9902c0bd3f295264f5c4baa5b11f359b5cea2a34d634f98d60b88c
  • MD5: 8faee7a6f7fd4738ce7176cd599192ca
  • BLAKE2b-256: 235b6a64c491a1edf80197f7119003ad2c572a259eea282ce000ed659ce7e97f

