Demo of a Caipi-like system for explanatory interactive learning.

Project description

Coipee

A demo implementation of the Caipi explanatory interactive learning algorithm. Coipee wraps a model that can be queried for instances it is uncertain about, together with an explanation of the model's prediction. The user can then correct the explanation and feed it back to Coipee, which triggers an additional round of training guided by the correction.

This implementation uses feature masks as explanations, i.e., boolean masks that enable or disable input features.
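For intuition, a mask over, say, five input features could look like the following (a minimal sketch using NumPy; the concrete type of the masks returned by Coipee may differ):

import numpy as np

# Hypothetical mask over 5 input features: only features 0 and 3 count as important
mask = np.array([True, False, False, True, False])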

Quickstart

Install with pip, ideally inside a virtual environment (here created with virtualenvwrapper):

mkvirtualenv -p python3.12 coipee

pip install coipee

Usage

Coipee revolves around a Coipee object:

barman = Coipee(
    model=base_model,  # the model to explain, e.g. a neural network
    fit_model=fit_model,  # the function to train the model, invoked after a correction
    pool=data_train,   # pool of data to measure the model's uncertainty, also used for query
    pool_labels=labels_train  # labels of the pool
)
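For concreteness, the pieces passed to the constructor could look like the following. This is only a sketch under assumptions: it uses a scikit-learn classifier on synthetic data, and the import path and the exact signature Coipee expects for fit_model are assumed here.

from coipee import Coipee  # assumed import path
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Hypothetical pool: synthetic data standing in for your own dataset
data_train, labels_train = make_classification(n_samples=200, n_features=10)

base_model = LogisticRegression()

def fit_model(model, X, y):
    # Assumed interface: retrain the model on the (correction-augmented) data
    model.fit(X, y)

barman = Coipee(model=base_model, fit_model=fit_model,
                pool=data_train, pool_labels=labels_train)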

A typical use involves querying the model for a number of uncertain instances

number_of_instances = 10
artifact = barman.query(number_of_instances)
print(artifact.explanation)

and retrieving a feature mask: features the model considers important are marked as True, the rest as False. Importance can also be thresholded at different levels: the higher the threshold, the more important a feature must be to be marked as True:

artifact = barman.query(10, threshold=0.01)
print(artifact.explanation)
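Since explanations are boolean masks, a quick way to see the effect of the threshold is to count how many features survive at each level (a sketch that assumes the mask behaves like a NumPy boolean array):

for threshold in (0.001, 0.01, 0.1):
    artifact = barman.query(number_of_instances, threshold=threshold)
    # .sum() counts the True entries, i.e. the features marked as important
    print(threshold, artifact.explanation.sum())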

Once we have our explanation, we can correct it by marking some important features as not important, and vice versa:

import copy

corrected_artifact = copy.deepcopy(artifact)

corrected_artifact.explanation[:] = False         # mark every feature as not important
corrected_artifact.explanation[[0, 1, 2]] = True  # then re-enable only features 0, 1, 2

Here we have told the model that only features 0, 1, and 2 are actually important. We can also retrieve the differences between two artifacts through the diff method:

print(f"Difference: {artifact.diff(corrected_artifact)}")

Now that we have corrected the explanation, we can feed it back to the model:

barman.stack_correction(corrected_artifact)  # adds the correction to correction stack of the model
barman.correct_model()  # triggers a training phase
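Putting the pieces together, a minimal interactive loop could look like the sketch below. How the corrections are produced is up to the user (or a simulated oracle); here that step is only indicated by a comment.

import copy

for _ in range(5):  # a few rounds of explanatory interactive learning
    artifact = barman.query(number_of_instances, threshold=0.01)
    corrected_artifact = copy.deepcopy(artifact)
    # ... edit corrected_artifact.explanation here, as shown above ...
    barman.stack_correction(corrected_artifact)
    barman.correct_model()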

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

coipee-0.0.2.tar.gz (21.5 kB)

Uploaded Source

Built Distribution

coipee-0.0.2-py3-none-any.whl (19.9 kB)

Uploaded Python 3

File details

Details for the file coipee-0.0.2.tar.gz.

File metadata

  • Download URL: coipee-0.0.2.tar.gz
  • Upload date:
  • Size: 21.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.3

File hashes

Hashes for coipee-0.0.2.tar.gz

  • SHA256: 2309eb804fbf49ced741ed3f89c635247ae8b97d1b52dc21263ba6ffae50bab0
  • MD5: eca3c81a5ae5aae41a3a39d185dd3e4f
  • BLAKE2b-256: 29812ef7a97afcaf134d81b41ba96aac137760ed37ce4dd0ac007b3c7bab68ee

See more details on using hashes here.

File details

Details for the file coipee-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: coipee-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 19.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.3

File hashes

Hashes for coipee-0.0.2-py3-none-any.whl

  • SHA256: 35dc4395ba48b518a65e7ab5c99f4b0bc5c3a29602baa7bbba3172f8498fb466
  • MD5: de7aec8286f86ed3198111a446f5467c
  • BLAKE2b-256: acac47657331e2ec187a19cdd45657364d7862e7308fc29249904ad1322c36b7

See more details on using hashes here.
