# OBAN Classifier

A Skorch-based flexible neural network for binary and multiclass classification.

Oban Classifier is a flexible neural network-based classifier built on top of PyTorch and Skorch. It supports both binary and multiclass classification, and allows users to define parameters such as the number of units, activation function, dropout rate, and more.

## Features

- Supports binary and multiclass classification.
- Allows user-defined parameters for hidden units, activation functions, dropout, and more.
- Built on Skorch and PyTorch for easy integration with scikit-learn pipelines.
- Provides detailed performance metrics, including accuracy, precision, recall, F1-score, and the confusion matrix.
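The metrics listed above are standard scikit-learn metrics; the sketch below reproduces them with a plain logistic regression as a stand-in model (it does not use oban_classifier itself, only the same scikit-learn metric functions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Stand-in model (logistic regression) just to demonstrate the metric calls;
# any trained classifier's predictions can be scored the same way.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1-score :", f1_score(y_test, y_pred))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
```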

## Installation

Install the library from PyPI:

```bash
pip install oban_classifier
```


## Usage Example

```python
from oban_classifier import oban_classifier, post_classification_analysis, plot_lime_importance
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Load the Breast Cancer dataset
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

# Train and evaluate the model (num_classes is automatically inferred)
netv, X_test, y_test = oban_classifier(X, y, num_units=128, max_epochs=80, lr=0.001)

# Convert X_test to DataFrame with feature names
X_test_df = pd.DataFrame(X_test, columns=X.columns)

# Predict probabilities
y_proba = netv.predict_proba(X_test_df.to_numpy())

# Perform post-classification analysis
post_classification_analysis(X_test_df, y_test, y_proba, threshold=0.5)

# Explain predictions using LIME with correct feature names
plot_lime_importance(netv, X_test_df, y_test, feature_names=X.columns)

# Predict on new data
new_data = pd.DataFrame([[15.0, 20.0, 85.0, 60.0, 0.5, 1.5, 3.0, 0.02, 0.2, 0.3,
                          0.1, 25.0, 50.0, 150.0, 100.0, 0.1, 0.5, 2.5, 0.01, 0.1,
                          15.0, 20.0, 85.0, 60.0, 0.5, 1.5, 3.0, 0.02, 0.2, 0.3]], 
                        columns=X.columns)

# Scale the new data; ideally reuse the exact scaler fitted on the training data.
# Here a StandardScaler fitted on the full dataset is used as an approximation.
scaler = StandardScaler()
scaler.fit(X)
new_data_scaled = scaler.transform(new_data)

# Predict the class for the new data
predicted_class = netv.predict(new_data_scaled)
print(f"Predicted class: {predicted_class}")

# Predict probabilities for the new data
predicted_probabilities = netv.predict_proba(new_data_scaled)
print(f"Predicted probabilities: {predicted_probabilities}")
```


### `oban_classifier` Parameters

- `X` (pd.DataFrame): The feature matrix; each row is an instance and each column a feature.
- `y` (pd.Series): The target variable; each value is the target class of the corresponding row in `X`.
- `num_units` (int, optional, default=128): The number of hidden units in the dense layers of the neural network.
- `nonlin` (torch.nn.Module, optional, default=nn.ReLU()): The non-linear activation function applied after each dense layer. Can be changed to other functions such as nn.Sigmoid() or nn.Tanh().
- `dropout_rate` (float, optional, default=0.5): The dropout rate applied to the layers to prevent overfitting. Must be between 0 and 1.
- `max_epochs` (int, optional, default=10): The maximum number of epochs to train the model.
- `lr` (float, optional, default=0.01): The learning rate for the optimizer.
- `test_size` (float, optional, default=0.2): The proportion of the dataset used for testing. Must be between 0 and 1.
- `random_state` (int, optional, default=42): The random seed for reproducible dataset splitting.
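The `test_size` and `random_state` parameters follow the usual scikit-learn splitting convention; the sketch below illustrates their effect with `train_test_split` directly (assuming, not confirmed from the source, that oban_classifier splits data the same way internally):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 50 toy samples, 2 features each
X = np.arange(100).reshape(50, 2)
y = np.arange(50) % 2

# test_size=0.2 → 20% of the rows (10 of 50) go to the test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
print(len(X_tr), len(X_te))  # 40 10

# The same random_state reproduces exactly the same split
X_tr2, X_te2, _, _ = train_test_split(X, y, test_size=0.2, random_state=42)
print(np.array_equal(X_te, X_te2))  # True
```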


### `post_classification_analysis` Parameters

- `X` (pd.DataFrame): The feature matrix used during testing.
- `y_true` (pd.Series): The true class labels for the test set.
- `y_proba` (np.ndarray): The predicted probabilities for each class.
- `threshold` (float, optional, default=0.5): The decision threshold for binary classification. Predictions with probabilities greater than or equal to the threshold are classified as 1, otherwise as 0. Ignored for multiclass classification.
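The binary thresholding rule described above can be sketched in plain NumPy (the probabilities here are hypothetical, and this is only an illustration of the rule, not the library's internal code):

```python
import numpy as np

# Hypothetical predicted probabilities for classes [0, 1] in a binary problem
y_proba = np.array([[0.9, 0.1],
                    [0.4, 0.6],
                    [0.5, 0.5],
                    [0.2, 0.8]])

threshold = 0.5
# Positive-class probability >= threshold → label 1, otherwise label 0
y_pred = (y_proba[:, 1] >= threshold).astype(int)
print(y_pred)  # [0 1 1 1]
```

Note that the 0.5/0.5 row is classified as 1, since the rule uses "greater than or equal to".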
