
# wittgenstein

_And is there not also the case where we play and - make up the rules as we go along?
- Ludwig Wittgenstein_

This module implements two iterative coverage-based ruleset algorithms for explainable machine learning: IREP and RIPPERk.

Performance is similar to sklearn's DecisionTree CART implementation (see _Performance Tests_).

For algorithm details, see my medium post or the papers below in _Useful References_.
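Both algorithms follow the classic separate-and-conquer (sequential covering) scheme: grow one rule at a time, then remove the training examples that rule covers before growing the next. The sketch below illustrates that loop over dict-based examples; it is a simplified teaching version, not this package's implementation, and all function names are hypothetical (the real algorithms also prune rules and, in RIPPER's case, optimize the ruleset under an MDL criterion).

```python
def covers(rule, example):
    # A rule is a conjunction of feature=value conditions; it covers an
    # example iff every condition matches. An empty rule covers everything.
    return all(example.get(f) == v for f, v in rule.items())

def grow_rule(pos, neg, features):
    # Greedily add one condition at a time, each time picking the candidate
    # that covers the fewest negatives while still covering some positives.
    # Stops when no negatives are covered or no features remain.
    rule = {}
    while any(covers(rule, ex) for ex in neg) and len(rule) < len(features):
        best = None
        for feat in features:
            if feat in rule:
                continue
            for value in {ex[feat] for ex in pos if covers(rule, ex)}:
                cand = dict(rule, **{feat: value})
                n_pos = sum(covers(cand, ex) for ex in pos)
                n_neg = sum(covers(cand, ex) for ex in neg)
                if n_pos and (best is None or n_neg < best[1]):
                    best = (cand, n_neg)
        if best is None:
            break
        rule = best[0]
    return rule

def sequential_covering(pos, neg, features):
    # The "coverage" loop: learn a rule, drop the positives it covers,
    # repeat until no positives remain (or no useful rule can be grown).
    ruleset = []
    while pos:
        rule = grow_rule(pos, neg, features)
        covered = [ex for ex in pos if covers(rule, ex)]
        if not covered:
            break
        ruleset.append(rule)
        pos = [ex for ex in pos if not covers(rule, ex)]
    return ruleset

# Toy demo in the spirit of the congressional-voting examples below:
pos = [{"physician-fee-freeze": "n", "el-salvador-aid": "y"},
       {"physician-fee-freeze": "n", "el-salvador-aid": "n"}]
neg = [{"physician-fee-freeze": "y", "el-salvador-aid": "y"}]
print(sequential_covering(pos, neg, ["physician-fee-freeze", "el-salvador-aid"]))
# [{'physician-fee-freeze': 'n'}]
```

The learned ruleset is a list of rules, interpreted as a disjunction: an example is predicted positive if any rule covers it.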

## Installation

To install, use:

```bash
$ pip install wittgenstein
```

To uninstall, use:

```bash
$ pip uninstall wittgenstein
```

## Usage

Usage syntax is similar to sklearn's. The current version, however, does require that data be passed in as a Pandas DataFrame.

Once you have loaded and split your data...
```python
>>> import pandas as pd
>>> df = pd.read_csv(dataset_filename)
>>> from sklearn.model_selection import train_test_split # or any other mechanism you want to use for data partitioning
>>> train, test = train_test_split(df, test_size=.33)
```
We can fit a ruleset classifier using RIPPER or IREP:
```python
>>> import wittgenstein as lw
>>> ripper_clf = lw.RIPPER() # Or irep_clf = lw.IREP() to build a model using IREP
>>> ripper_clf.fit(train, class_feat='Party') # Or you can call .fit with params train_X, train_y. See docstrings for hyperparameter options.
>>> ripper_clf
<RIPPER object with fit ruleset (k=2, prune_size=0.33, dl_allowance=64)> # Hyperparameter details available in the docstrings and medium post
```

Access the underlying trained model with the `ruleset_` attribute. A ruleset is a disjunction of conjunctions -- 'V' represents 'or'; '^' represents 'and'.

```python
>>> ripper_clf.ruleset_
<Ruleset object: [physician-fee-freeze=n] V [synfuels-corporation-cutback=y^adoption-of-the-budget-resolution=y^anti-satellite-test-ban=n]>
```
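The printed ruleset reads directly as a Boolean expression over an example's feature values. A minimal sketch of that semantics (the literal list-of-dicts encoding and the `ruleset_predicts` helper are illustrations, not part of this package's API):

```python
# The ruleset printed above, written out by hand:
ruleset = [                                     # outer list: 'V' (or)
    {"physician-fee-freeze": "n"},              # inner dict: '^' (and)
    {"synfuels-corporation-cutback": "y",
     "adoption-of-the-budget-resolution": "y",
     "anti-satellite-test-ban": "n"},
]

def ruleset_predicts(ruleset, example):
    # Positive iff at least one rule has all of its conditions satisfied.
    return any(all(example.get(f) == v for f, v in rule.items())
               for rule in ruleset)

example = {"physician-fee-freeze": "y",
           "synfuels-corporation-cutback": "y",
           "adoption-of-the-budget-resolution": "y",
           "anti-satellite-test-ban": "n"}
print(ruleset_predicts(ruleset, example))  # True: the second rule fires
```

This is what makes coverage-based rulesets interpretable: every positive prediction can be traced to at least one human-readable rule.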
To score our fit model:
```python
>>> test_X = test.drop('Party', axis=1)
>>> test_y = test['Party']
>>> ripper_clf.score(test_X, test_y)
```
The default scoring metric is accuracy. You can pass in alternate scoring functions, including those available through sklearn:

```python
>>> from sklearn.metrics import precision_score, recall_score
>>> precision = ripper_clf.score(test_X, test_y, precision_score)
>>> recall = ripper_clf.score(test_X, test_y, recall_score)
>>> print(f'precision: {precision} recall: {recall}')
precision: 0.9914..., recall: 0.9953...
```
To perform predictions:
```python
>>> ripper_clf.predict(new_data)[:5]
[True, True, False, True, False]
```
We can also ask our model to tell us why it made each positive prediction that it did:
```python
>>> ripper_clf.predict(new_data, give_reasons=True)[:5]
([True, True, False, True, True],
 [[<Rule object: [physician-fee-freeze=n]>],
  [<Rule object: [physician-fee-freeze=n]>,
   <Rule object: [synfuels-corporation-cutback=y^adoption-of-the-budget-resolution=y^anti-satellite-test-ban=n]>], # This example met multiple sufficient conditions for a positive prediction
  [], # Negative prediction: no rule fired
  [<Rule object: [physician-fee-freeze=n]>],
  [<Rule object: [physician-fee-freeze=n]>]])
```
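The reasons list lends itself to simple rule-level diagnostics, such as tallying how often each rule fired across a batch of predictions. A sketch under the assumption that each rule has been converted to its string form (e.g. via `str()`); the `reasons` literal below is a stand-in for the second element of the tuple shown above:

```python
from collections import Counter

# Stand-in for the reasons returned alongside a batch of predictions;
# each inner list holds the rules that fired for one example.
reasons = [
    ["[physician-fee-freeze=n]"],
    ["[physician-fee-freeze=n]",
     "[synfuels-corporation-cutback=y^adoption-of-the-budget-resolution=y^anti-satellite-test-ban=n]"],
    [],                                  # negative prediction: no rule fired
    ["[physician-fee-freeze=n]"],
]

# Count how many predictions each rule participated in.
rule_counts = Counter(rule for rule_list in reasons for rule in rule_list)
print(rule_counts.most_common(1))
# [('[physician-fee-freeze=n]', 3)]
```

Rules that rarely fire on held-out data are natural candidates for closer inspection.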

## Useful references
- My medium post about the package (coming soon)
- Fürnkranz & Widmer's IREP paper
- Cohen's RIPPER paper
- Partial decision trees
- C4.5 paper including all the gory details on MDL

