Implementation of ruleset covering algorithms for explainable machine learning

# wittgenstein

_And is there not also the case where we play and--make up the rules as we go along?
-Ludwig Wittgenstein_

![the duck-rabbit](

## Summary

This package implements two iterative coverage-based ruleset algorithms: IREP and RIPPERk.

Performance is similar to sklearn's DecisionTree CART implementation (see [Performance Tests](

For an explanation of the algorithms, see my article in Towards Data Science, or the papers listed below under _Useful References_.
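The core idea both algorithms share — sequential covering — can be sketched in a few lines of plain Python. This is an illustrative toy only, not wittgenstein's implementation: the real IREP and RIPPER grow rules with an information-gain heuristic, prune them against a held-out set, and (for RIPPER) optimize the ruleset under an MDL criterion.

```python
def covers(rule, example):
    """A rule is a conjunction of feature=value conditions (here, a dict)."""
    return all(example.get(f) == v for f, v in rule.items())

def grow_rule(pos, neg):
    """Greedily add the condition that excludes the most negatives,
    until the rule covers no negative examples (or features run out)."""
    rule, cov_pos, cov_neg = {}, list(pos), list(neg)
    while cov_neg:
        candidates = sorted({(f, v) for ex in cov_pos for f, v in ex.items()
                             if f not in rule})
        best = None
        for f, v in candidates:
            cand = dict(rule, **{f: v})
            p = [e for e in cov_pos if covers(cand, e)]
            n = [e for e in cov_neg if covers(cand, e)]
            if p and (best is None or len(n) < len(best[2])):
                best = (cand, p, n)
        if best is None:
            break  # no remaining condition covers any positives
        rule, cov_pos, cov_neg = best
    return rule

def sequential_covering(pos, neg):
    """Build a disjunction of rules: learn one rule, remove the positives
    it covers, and repeat until all positives are explained."""
    ruleset = []
    while pos:
        rule = grow_rule(pos, neg)
        if not rule:
            break
        ruleset.append(rule)
        pos = [e for e in pos if not covers(rule, e)]
    return ruleset
```

On a toy dataset where all positives share `a='n'`, this learns the single-rule ruleset `[{'a': 'n'}]`.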

## Installation

To install, use
$ pip install wittgenstein

To uninstall, use
$ pip uninstall wittgenstein

## Usage

Usage syntax is similar to sklearn's. The current version, however, requires that data be passed in as a pandas DataFrame.

Once you have loaded and split your data...
>>> import pandas as pd
>>> df = pd.read_csv(dataset_filename)
>>> from sklearn.model_selection import train_test_split # or any other mechanism you want to use for data partitioning
>>> train, test = train_test_split(df, test_size=.33)
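If you would rather not pull in sklearn just for partitioning, a shuffled holdout split can be done with pandas alone. A sketch — the toy DataFrame, fraction, and seed below are arbitrary examples, not part of wittgenstein:

```python
import pandas as pd

# Toy stand-in for the loaded dataset; any DataFrame works the same way.
df = pd.DataFrame({"votes": range(10), "Party": ["y"] * 5 + ["n"] * 5})

# Sample 70% of rows (shuffled) for training; the remainder is the test set.
train = df.sample(frac=0.7, random_state=0)
test = df.drop(train.index)
```

The two frames partition `df` with no overlapping rows.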
We can fit a ruleset classifier using RIPPER or IREP:
>>> import wittgenstein as lw
>>> ripper_clf = lw.RIPPER() # Or irep_clf = lw.IREP() to build a model using IREP
>>> ripper_clf.fit(train, class_feat='Party') # Or call .fit with params train_X, train_y
>>> ripper_clf
<RIPPER object with fit ruleset (k=2, prune_size=0.33, dl_allowance=64)> # Hyperparameter details available in the docstrings and medium post

Access the underlying trained model with the `ruleset_` attribute. A ruleset is a disjunction of conjunctions: 'V' represents 'or'; '^' represents 'and'.
>>> ripper_clf.ruleset_
<Ruleset object: [physician-fee-freeze=n] V [synfuels-corporation-cutback=y^adoption-of-the-budget-resolution=y^anti-satellite-test-ban=n]>
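To make the notation concrete, here is a hypothetical hand-rolled evaluator — not wittgenstein's API — showing how such a disjunction of conjunctions classifies a record: positive iff any rule matches, and a rule matches iff all of its conditions hold.

```python
# Hypothetical illustration only. The outer list is the disjunction ('V');
# each dict is one conjunctive rule ('^') of feature=value conditions.
ruleset = [
    {"physician-fee-freeze": "n"},
    {"synfuels-corporation-cutback": "y",
     "adoption-of-the-budget-resolution": "y",
     "anti-satellite-test-ban": "n"},
]

def predict_one(example, ruleset):
    """Positive iff the example satisfies every condition of at least one rule."""
    return any(all(example.get(feat) == val for feat, val in rule.items())
               for rule in ruleset)
```

For instance, `predict_one({"physician-fee-freeze": "n"}, ruleset)` returns `True` via the first rule alone.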
To score our fit model:
>>> test_X = test.drop('Party', axis=1)
>>> test_y = test['Party']
>>> ripper_clf.score(test_X, test_y)
Default scoring metric is accuracy. You can pass in alternate scoring functions, including those available through sklearn:
>>> from sklearn.metrics import precision_score, recall_score
>>> precision = ripper_clf.score(test_X, test_y, precision_score)
>>> recall = ripper_clf.score(test_X, test_y, recall_score)
>>> print(f'precision: {precision} recall: {recall}')
precision: 0.9914... recall: 0.9953...
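Since the scoring function appears to be invoked with sklearn's `(y_true, y_pred)` convention, you can also plug in a metric of your own. A hypothetical example — the metric below is illustrative, not part of wittgenstein:

```python
def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives that were predicted positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if p and not t)
    negatives = sum(1 for t in y_true if not t)
    return fp / negatives if negatives else 0.0

# Would then be passed the same way as the sklearn metrics, e.g.:
# ripper_clf.score(test_X, test_y, false_positive_rate)
```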
To perform predictions:
>>> ripper_clf.predict(new_data)[:5]
[True, True, False, True, False]
We can also ask our model to tell us why it made each positive prediction that it did:
>>> ripper_clf.predict(new_data, give_reasons=True)[:5]
([True, True, False, True, True],
[[<Rule object: [physician-fee-freeze=n]>],
[<Rule object: [physician-fee-freeze=n]>,
<Rule object: [synfuels-corporation-cutback=y^adoption-of-the-budget-resolution=y^anti-satellite-test-ban=n]>], # This example met multiple sufficient conditions for a positive prediction
[],
[<Rule object: [physician-fee-freeze=n]>],
...])
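Because the reasons line up index-by-index with the predictions, pairing the two is a one-liner. A toy sketch with placeholder values standing in for the real output:

```python
# Placeholder values standing in for predict(..., give_reasons=True) output.
predictions = [True, True, False, True]
reasons = [["rule1"], ["rule1", "rule2"], [], ["rule1"]]

# Keep only the positive predictions together with the rules that fired.
explained = [(i, r) for i, (p, r) in enumerate(zip(predictions, reasons)) if p]
```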

## Issues
If you encounter any issues, or have feedback or suggestions for making wittgenstein more useful for you, please post them to [issues](, and I'll respond.

## Contributing
Contributions are welcome! If you are interested in contributing, let me know at or on [linkedin](

## Useful references
- My medium post on IREP, RIPPER, and wittgenstein (coming soon)
- [Fürnkranz-Widmer IREP paper](
- [Cohen's RIPPER paper](
- [Partial decision trees](
- [C4.5 paper including all the gory details on MDL](

Files for wittgenstein, version 0.1.4:

| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| wittgenstein-0.1.4.tar.gz | 20.5 kB | Source | None |
