An intuitive library to plot evaluation metrics.
Project description
kds - KeyToDataScience Visualization Library
Plot Decile Table, Lift, Gain and KS Statistic charts with single-line functions
Just input 'labels' and 'probabilities' to get a quick report for analysis
kds.metrics.report(y_test, y_prob)
report accepts a plot_style argument
with multiple plot-style options. For more, explore the examples below!
kds is the result of a data scientist's humble effort to provide an easy way of visualizing metrics, so that one can focus on the analysis rather than hassling with copying and pasting various visualization functions.
Installation
Installation is simple! Just double-check that you have the dependencies Pandas, NumPy and Matplotlib installed.
Then just run:
pip install kds
Or if you want the latest development version, clone this repo and run
python setup.py install
at the root folder.
Examples
Let's dive into using various plots with the sample iris dataset from scikit-learn.
1. Lift Plot
# REPRODUCIBLE EXAMPLE
# Load Dataset and train-test split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=1, random_state=3)
clf = clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
# The magic happens here
import kds
kds.metrics.plot_lift(y_test, y_prob[:,1])
Yup... that's it: single-line functions for detailed visualization.
You can see clearly here that kds.metrics.plot_lift
needs only the actual y_true values and the predicted probabilities to generate the plot. This lets you use anything you want as the classifier, from Random Forest to Keras NNs to XGBoost to any classifier algorithm you like.
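Under the hood, lift is simply each decile's response rate divided by the overall response rate. A minimal sketch on synthetic data (illustrative only, not the library's internal code) makes that concrete:

```python
import numpy as np
import pandas as pd

# Synthetic binary labels and scores, for illustration only
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.where(y_true == 1,
                   rng.uniform(0.3, 1.0, 1000),
                   rng.uniform(0.0, 0.7, 1000))

# Rank by score, split into 10 equal deciles, and compare each
# decile's response rate with the overall response rate
df = pd.DataFrame({"y": y_true, "p": y_score}).sort_values("p", ascending=False)
df["decile"] = np.repeat(np.arange(1, 11), len(df) // 10)
lift = df.groupby("decile")["y"].mean() / df["y"].mean()
print(lift.round(2))
```

A lift of 2 in the first decile means the top-scored 10% contains twice as many positives as a random 10% sample would.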
Want to see more examples?
2. Cumulative Gain Plot
# REPRODUCIBLE EXAMPLE
# Load Dataset and train-test split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=1, random_state=3)
clf = clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
# The magic happens here
import kds
kds.metrics.plot_cumulative_gain(y_test, y_prob[:,1])
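The cumulative gain curve answers: if you target the top x% of the population by score, what share of all positives do you capture? A hand-rolled sketch on synthetic data (illustrative only, not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_score = np.where(y_true == 1,
                   rng.uniform(0.3, 1.0, 1000),
                   rng.uniform(0.0, 0.7, 1000))

# Sort by score (best first); gain(x) = share of all positives
# captured within the top x fraction of the population
order = np.argsort(-y_score)
gains = np.cumsum(y_true[order]) / y_true.sum()

# Gain achieved when targeting the top 20% of the population
top20 = gains[int(0.2 * len(y_true)) - 1]
print(round(float(top20), 2))
```

A random model would capture 20% of positives in the top 20%; anything above that diagonal is the model's added value.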
3. KS Statistic Plot
# REPRODUCIBLE EXAMPLE
# Load Dataset and train-test split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=1, random_state=3)
clf = clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
# The magic happens here
import kds
kds.metrics.plot_ks_statistic(y_test, y_prob[:,1])
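The KS statistic is the maximum gap between the empirical CDFs of the scores for the positive and negative classes; a larger gap means better separation. A small NumPy sketch on synthetic data (illustrative, not the library's code):

```python
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=1000)
y_score = np.where(y_true == 1,
                   rng.uniform(0.3, 1.0, 1000),
                   rng.uniform(0.0, 0.7, 1000))

pos = np.sort(y_score[y_true == 1])
neg = np.sort(y_score[y_true == 0])
grid = np.sort(y_score)

# Empirical CDFs of the two score distributions on a common grid
cdf_pos = np.searchsorted(pos, grid, side="right") / pos.size
cdf_neg = np.searchsorted(neg, grid, side="right") / neg.size

# KS = largest vertical distance between the two CDFs
ks = float(np.abs(cdf_pos - cdf_neg).max())
print(round(ks, 3))
```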
4. Decile Table
# REPRODUCIBLE EXAMPLE
# Load Dataset and train-test split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=1, random_state=3)
clf = clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
# The magic happens here
import kds
kds.metrics.decile_table(y_test, y_prob[:,1])
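For intuition, a decile table can be approximated with plain pandas on synthetic data. The column names below are illustrative only; they are not necessarily the columns returned by kds.metrics.decile_table:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=1000)
y_score = np.where(y_true == 1,
                   rng.uniform(0.3, 1.0, 1000),
                   rng.uniform(0.0, 0.7, 1000))

# Rank by predicted probability and bucket into 10 equal deciles
df = pd.DataFrame({"y": y_true, "p": y_score}).sort_values("p", ascending=False)
df["decile"] = np.repeat(np.arange(1, 11), len(df) // 10)

# One row per decile: score range, counts, and response rate
table = df.groupby("decile").agg(
    prob_min=("p", "min"),
    prob_max=("p", "max"),
    cnt_total=("y", "size"),
    cnt_resp=("y", "sum"),
)
table["resp_rate"] = table["cnt_resp"] / table["cnt_total"]
print(table)
```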
5. Report
# REPRODUCIBLE EXAMPLE
# Load Dataset and train-test split
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=1, random_state=3)
clf = clf.fit(X_train, y_train)
y_prob = clf.predict_proba(X_test)
# The magic happens here
import kds
kds.metrics.report(y_test, y_prob[:,1], plot_style='ggplot')
Choose among the multiple plot styles listed by plt.style.available to generate quick and beautiful plots.
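The valid style names come straight from Matplotlib, so you can list them directly:

```python
import matplotlib.pyplot as plt

# Every name printed here is a valid value for the plot_style argument
for name in sorted(plt.style.available):
    print(name)
```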
Contributing to kds
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests, and visit our contributor guidelines.
Happy plotting!
Change Log
================
0.1.3 (15/06/2021)
- Added new parameter 'change_decile' in decile_table to change the number of partitions. Fixed a bug in calling kds.metrics.report.
0.1.1 (27/01/2021)
- Updated Readme and plot styles
0.1.0 (27/01/2021)
- First Release
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file kds-0.1.3.tar.gz.
File metadata
- Download URL: kds-0.1.3.tar.gz
- Upload date:
- Size: 8.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.5.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.7.0
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | f63a2657e79a365716026139f149202bc58a3014c611f8ff0d2860bbea7a7ca6
MD5 | d7efb531bac8b0aa0d05fd4041089550
BLAKE2b-256 | 5f272dfb21f7e4bc2234384899e1ec2a381e780dc5fbef541445c5ecec6e2290
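After downloading, you can verify the file against the digests published here; a small sketch using Python's standard hashlib module:

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare sha256_of("kds-0.1.3.tar.gz") with the SHA256 digest listed above.
```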
File details
Details for the file kds-0.1.3-py3-none-any.whl.
File metadata
- Download URL: kds-0.1.3-py3-none-any.whl
- Upload date:
- Size: 7.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.5.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.7.0
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | 07786dd8fc005d4cd2c91a435698876b8711a73d5c94d08c6830aec6073c5eca
MD5 | 678b3564b09bdbf91dbf3adc20c155c9
BLAKE2b-256 | d551b1e111359c1a1af54875c1faea0671cc7d7f425a9790ad35c013d59daf60