
This package provides functionality for reporting on how your ML model makes its decisions.


MODEL INTERPRETER


model_interpreter returns feature importance values for a single row model prediction with functionality to:

  • handle regression and binary / multiclass classification models
  • sort features by importance
  • map feature names to more interpretable names
  • aggregate feature importances across features
  • handle categorical data within the input features
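To illustrate what "name mapping" and "aggregating importances" mean in practice, here is a minimal pure-Python sketch. The dictionaries and names are made up for illustration; this is not model_interpreter's internal implementation:

```python
# Sketch: sum per-column contributions for one-hot encoded categorical
# columns back to a single parent feature, then map raw column names to
# friendlier display names. All names and values here are hypothetical.
contributions = {
    "colour_red": 0.12,
    "colour_blue": -0.05,
    "age": 0.30,
}

# map each raw column to the feature it should be reported under
aggregation_map = {"colour_red": "colour", "colour_blue": "colour", "age": "age"}
display_names = {"colour": "Vehicle colour", "age": "Driver age"}

aggregated = {}
for column, value in contributions.items():
    feature = aggregation_map[column]
    aggregated[feature] = aggregated.get(feature, 0.0) + value

readable = {display_names[f]: round(v, 2) for f, v in aggregated.items()}
print(readable)  # {'Vehicle colour': 0.07, 'Driver age': 0.3}
```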

model_interpreter uses SHAP to calculate the single-row feature importance values.

The package tries to fit one of the three SHAP explainers below, in the following order:

  • TreeExplainer
  • LinearExplainer
  • KernelExplainer
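The fallback can be pictured as a simple try-in-order loop. The sketch below is illustrative only: plain callables stand in for the SHAP explainer classes so the example runs without shap installed, and this is not the package's actual implementation:

```python
# Sketch of the "try explainers in order" pattern: attempt each candidate
# in priority order and keep the first one that can be built for the model.
def fit_first_available(model, explainer_factories):
    """Return the first explainer that can be built for this model."""
    errors = []
    for factory in explainer_factories:
        try:
            return factory(model)
        except Exception as exc:  # a factory rejects unsupported models
            errors.append(exc)
    raise RuntimeError(f"no explainer could be fitted: {errors}")

# toy stand-ins for TreeExplainer / LinearExplainer / KernelExplainer
def tree_explainer(model):
    raise TypeError("not a tree model")

def linear_explainer(model):
    return ("linear", model)

def kernel_explainer(model):
    return ("kernel", model)

explainer = fit_first_available(
    "my_model", [tree_explainer, linear_explainer, kernel_explainer]
)
print(explainer)  # ('linear', 'my_model')
```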

Here is a simple example of generating single-row feature importances for a classification model:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

from model_interpreter.interpreter import ModelInterpreter

# generate a classification dataset
X, y = make_classification(
    n_samples=1000,
    n_features=4,
    n_informative=2,
    n_redundant=0,
    random_state=0,
    shuffle=False,
)

# make_classification returns a numpy array, so convert to a
# DataFrame with named columns
feature_names = ["feature1", "feature2", "feature3", "feature4"]
X = pd.DataFrame(X, columns=feature_names)

# fit a model
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# fit model interpreter to model
model_interpreter = ModelInterpreter(feature_names)
model_interpreter.fit(clf)

# return feature contribution importances for a single row
single_row = X.head(1)
contribution_list = model_interpreter.transform(
    single_row, return_type="name_value_dicts"
)

print(contribution_list)

This will return the following output:

[{'Name': 'feature2', 'Value': -0.349129583}, {'Name': 'feature1', 'Value': -0.0039231513}, {'Name': 'feature4', 'Value': 0.0031653932}, {'Name': 'feature3', 'Value': 0.0013787609}]
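The contributions are sorted by absolute magnitude, most influential first. If you need to re-rank such a list of name/value dicts yourself (for example after filtering), a plain-Python sort does it; the list below simply hard-codes the example output above:

```python
# Re-sort a list of {"Name": ..., "Value": ...} contributions by
# absolute SHAP value, largest first (the ordering shown above).
contribution_list = [
    {"Name": "feature3", "Value": 0.0013787609},
    {"Name": "feature2", "Value": -0.349129583},
    {"Name": "feature4", "Value": 0.0031653932},
    {"Name": "feature1", "Value": -0.0039231513},
]

ranked = sorted(contribution_list, key=lambda d: abs(d["Value"]), reverse=True)
print([d["Name"] for d in ranked])
# ['feature2', 'feature1', 'feature4', 'feature3']
```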

Installation

The easiest way to get model_interpreter is directly from PyPI:

pip install model_interpreter

Examples

To help you get started, there are example notebooks in the examples folder of the repo that show how to use the package.

The example notebooks can also be opened in Binder via the launch binder shield in the repo; use the directory button in the left sidebar to navigate to a specific notebook.

Issues

For bugs and feature requests please open an issue.

Build and test

The test framework we are using for this project is pytest. To build the package locally and run the tests follow the steps below.

First clone the repo and move to the root directory:

git clone https://github.com/lvgig/model_interpreter.git
cd model_interpreter

Next install model_interpreter and its development dependencies:

pip install . -r requirements-dev.txt

Finally, run the test suite with pytest:

pytest

Contribute

model_interpreter is under active development, and we're excited if you're interested in contributing!
