Library to explain a dataset in natural language.
Project description
Scikit-learn friendly library to explain, predict, and steer text models/data.
Also a bunch of utilities for getting started with text data.
Explainable modeling/steering
Model | Reference | Output | Description
---|---|---|---
Tree-Prompt | Doc, Code, Paper, Demo | Explanation + Steering | Generates a tree of prompts to steer an LLM (Official)
iPrompt | Doc, Code, Paper, Demo | Explanation + Steering | Generates a prompt that explains patterns in data (Official)
AutoPrompt | Doc, Code, Paper | Explanation + Steering | Find a natural-language prompt using input-gradients
D3 | Doc, Code, Paper, Demo | Explanation | Explain the difference between two distributions
SASC | Doc, Code, Paper | Explanation | Explain a black-box text module using an LLM (Official)
Aug-Linear | Doc, Code, Paper, Demo | Linear model | Fit better linear model using an LLM to extract embeddings (Official)
Aug-Tree | Doc, Code, Paper, Demo | Decision tree | Fit better decision tree using an LLM to expand features (Official)
QAEmb | Doc, Code, Paper, Demo | Explainable embedding | Generate interpretable embeddings by asking LLMs questions (Official)
KAN | Doc, Code, Paper, Demo | Small network | Fit 2-layer Kolmogorov-Arnold network

Reference key: Doc = documentation, Code = reference code, Paper = research paper, Demo = demo notebook.
We plan to support other interpretable algorithms like RLPrompt, CBMs, and NBDT. If you want to contribute an algorithm, feel free to open a PR.
General utilities
Model | Description
---|---
LLM wrapper | Easily call different LLMs
Dataset wrapper | Download minimally processed huggingface datasets
Bag of Ngrams | Learn a linear model of ngrams
Linear Finetune | Finetune a single linear layer on top of LLM embeddings
Quickstart
Installation: pip install imodelsx
(or, for more control, clone and install from source)
Demos: see the demo notebooks
Natural-language explanations
Tree-prompt
from imodelsx import TreePromptClassifier
import datasets
import numpy as np
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
# set up data
rng = np.random.default_rng(seed=42)
dset_train = datasets.load_dataset('rotten_tomatoes')['train']
dset_train = dset_train.select(rng.choice(
len(dset_train), size=100, replace=False))
dset_val = datasets.load_dataset('rotten_tomatoes')['validation']
dset_val = dset_val.select(rng.choice(
len(dset_val), size=100, replace=False))
# set up arguments
prompts = [
"This movie is",
" Positive or Negative? The movie was",
" The sentiment of the movie was",
" The plot of the movie was really",
" The acting in the movie was",
]
verbalizer = {0: " Negative.", 1: " Positive."}
checkpoint = "gpt2"
# fit model
m = TreePromptClassifier(
checkpoint=checkpoint,
prompts=prompts,
verbalizer=verbalizer,
cache_prompt_features_dir=None,  # optionally cache prompt features, e.g. 'cache_prompt_features_dir/gpt2'
)
m.fit(dset_train["text"], dset_train["label"])
# compute accuracy
preds = m.predict(dset_val['text'])
print('\nTree-Prompt acc (val) ->',
np.mean(preds == dset_val['label'])) # -> 0.7
# compare to accuracy for individual prompts
for i, prompt in enumerate(prompts):
    print(i, prompt, '->', m.prompt_accs_[i])  # -> 0.65, 0.5, 0.5, 0.56, 0.51
# visualize decision tree
plot_tree(
m.clf_,
fontsize=10,
feature_names=m.feature_names_,
class_names=list(verbalizer.values()),
filled=True,
)
plt.show()
iPrompt
from imodelsx import explain_dataset_iprompt, get_add_two_numbers_dataset
# get a simple dataset of adding two numbers
input_strings, output_strings = get_add_two_numbers_dataset(num_examples=100)
for i in range(5):
    print(repr(input_strings[i]), repr(output_strings[i]))
# explain the relationship between the inputs and outputs
# with a natural-language prompt string
prompts, metadata = explain_dataset_iprompt(
input_strings=input_strings,
output_strings=output_strings,
checkpoint='EleutherAI/gpt-j-6B', # which language model to use
num_learned_tokens=3, # how long of a prompt to learn
n_shots=3, # shots per example
n_epochs=15, # how many epochs to search
verbose=0, # how much to print
llm_float16=True, # whether to load the model in float16
)
Here, prompts is a list of the natural-language prompt strings that were found.
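For example, the learned prompts can be inspected directly:
# print the first few prompts found by iPrompt
for prompt in prompts[:3]:
    print(prompt)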
D3 (DescribeDistributionalDifferences)
from imodelsx import explain_dataset_d3
hypotheses, hypothesis_scores = explain_dataset_d3(
pos=positive_samples, # List[str] of positive examples
neg=negative_samples, # another List[str]
num_steps=100,
num_folds=2,
batch_size=64,
)
SASC
Here, we explain a module rather than a dataset
import numpy as np
from imodelsx import explain_module_sasc
# a toy module that responds to the length of a string
mod = lambda str_list: np.array([len(s) for s in str_list])
# a toy dataset where the longest strings are animals
text_str_list = ["red", "blue", "x", "1", "2", "hippopotamus", "elephant", "rhinoceros"]
explanation_dict = explain_module_sasc(
text_str_list,
mod,
ngrams=1,
)
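The returned explanation can then be inspected (a minimal sketch; the exact structure of explanation_dict is not documented here):
# print the explanation produced by SASC
print(explanation_dict)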
Aug-imodels
Use these just like a scikit-learn model. During training, they fit better features via LLMs, but at test time they are extremely fast and completely transparent.
from imodelsx import AugLinearClassifier, AugTreeClassifier, AugLinearRegressor, AugTreeRegressor
import datasets
import numpy as np
# set up data
dset = datasets.load_dataset('rotten_tomatoes')['train']
dset = dset.select(np.random.choice(len(dset), size=300, replace=False))
dset_val = datasets.load_dataset('rotten_tomatoes')['validation']
dset_val = dset_val.select(np.random.choice(len(dset_val), size=300, replace=False))
# fit model
m = AugLinearClassifier(
checkpoint='textattack/distilbert-base-uncased-rotten-tomatoes',
ngrams=2, # use bigrams
)
m.fit(dset['text'], dset['label'])
# predict
preds = m.predict(dset_val['text'])
print('acc_val', np.mean(preds == dset_val['label']))
# interpret
print('Total ngram coefficients: ', len(m.coefs_dict_))
print('Most positive ngrams')
for k, v in sorted(m.coefs_dict_.items(), key=lambda item: item[1], reverse=True)[:8]:
    print('\t', k, round(v, 2))
print('Most negative ngrams')
for k, v in sorted(m.coefs_dict_.items(), key=lambda item: item[1])[:8]:
    print('\t', k, round(v, 2))
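AugTreeClassifier can be fit in the same way; the sketch below assumes it accepts the same checkpoint and ngrams arguments as AugLinearClassifier (check the docs for the full argument list):
# fit an Aug-Tree model on the same data (argument names assumed to match AugLinearClassifier)
m_tree = AugTreeClassifier(
    checkpoint='textattack/distilbert-base-uncased-rotten-tomatoes',
    ngrams=2,
)
m_tree.fit(dset['text'], dset['label'])
preds_tree = m_tree.predict(dset_val['text'])
print('acc_val (Aug-Tree)', np.mean(preds_tree == dset_val['label']))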
KAN
import imodelsx
from sklearn.datasets import make_classification, make_regression
from sklearn.metrics import accuracy_score
import numpy as np
X, y = make_classification(n_samples=5000, n_features=5, n_informative=3)
model = imodelsx.KANClassifier(hidden_layer_size=64, device='cpu',
regularize_activation=1.0, regularize_entropy=1.0)
model.fit(X, y)
y_pred = model.predict(X)
print('Train acc', accuracy_score(y, y_pred))  # accuracy on the training data
# now try regression
X, y = make_regression(n_samples=5000, n_features=5, n_informative=3)
model = imodelsx.kan.KANRegressor(hidden_layer_size=64, device='cpu',
regularize_activation=1.0, regularize_entropy=1.0)
model.fit(X, y)
y_pred = model.predict(X)
print('Train correlation', np.corrcoef(y, y_pred.flatten())[0, 1])  # correlation on the training data
General utilities
Easy baselines
Easy-to-fit baselines that follow the sklearn API.
from imodelsx import LinearFinetuneClassifier, LinearNgramClassifier
# fit a simple one-layer finetune on top of LLM embeddings
m = LinearFinetuneClassifier(
checkpoint='distilbert-base-uncased',
)
m.fit(dset['text'], dset['label'])
preds = m.predict(dset_val['text'])
acc = (preds == dset_val['label']).mean()
print('validation acc', acc)
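LinearNgramClassifier follows the same pattern; a minimal sketch assuming the default constructor settings:
# fit a linear model over ngram counts (constructor arguments assumed to have reasonable defaults)
m_ngram = LinearNgramClassifier()
m_ngram.fit(dset['text'], dset['label'])
preds_ngram = m_ngram.predict(dset_val['text'])
print('validation acc (ngrams)', (preds_ngram == dset_val['label']).mean())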
LLM wrapper
Easy API for calling different language models with caching (much more lightweight than langchain).
import imodelsx.llm
# supports any huggingface checkpoint or openai checkpoint (including chat models)
llm = imodelsx.llm.get_llm(
checkpoint="gpt2-xl", # text-davinci-003, gpt-3.5-turbo, ...
CACHE_DIR=".cache",
)
out = llm("May the Force be")
llm("May the Force be") # when computing the same string again, uses the cache
Data wrapper
API for loading huggingface datasets with basic preprocessing.
import imodelsx.data
dset, dataset_key_text = imodelsx.data.load_huggingface_dataset('ag_news')
# Ensures that dset has splits named 'train' and 'validation',
# and that each split's input text is in the column given by dataset_key_text
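The returned key can then be used to access the text column, for example:
# peek at the first two training examples
print(dset['train'][dataset_key_text][:2])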
Related work
- imodels package (JOSS 2021 github) - interpretable ML package for concise, transparent, and accurate predictive modeling (sklearn-compatible).
- Adaptive wavelet distillation (NeurIPS 2021 pdf, github) - distilling a neural network into a concise wavelet model
- Transformation importance (ICLR 2020 workshop pdf, github) - using simple reparameterizations, allows for calculating disentangled importances to transformations of the input (e.g. assigning importances to different frequencies)
- Hierarchical interpretations (ICLR 2019 pdf, github) - extends CD to CNNs / arbitrary DNNs, and aggregates explanations into a hierarchy
- Interpretation regularization (ICML 2020 pdf, github) - penalizes CD / ACD scores during training to make models generalize better
- PDR interpretability framework (PNAS 2019 pdf) - an overarching framework for guiding and framing interpretable machine learning
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file imodelsx-1.0.8.tar.gz.
File metadata
- Download URL: imodelsx-1.0.8.tar.gz
- Upload date:
- Size: 103.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 45800b854689828012175ff36d8e3b2cde7c5e14877b35b827cad63eca0e6d19
MD5 | 8f5adcf66521ab07c9d6cefb6cecca12
BLAKE2b-256 | 78e093edec9e93af5f80978f310273614c441d5715e2dafb89f88c6942bf342d
File details
Details for the file imodelsx-1.0.8-py3-none-any.whl.
File metadata
- Download URL: imodelsx-1.0.8-py3-none-any.whl
- Upload date:
- Size: 115.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | e3a6e2b3529fce691a1daa7544a1acc5948c660db35807593260d8c39253d8cf
MD5 | 1f48628b97b19ad09ee3baf06741a17c
BLAKE2b-256 | 4ef14cc58330abd20b3d5ae7f1cf563dc18d51fa86efe66c7f15305d70398cc4