
# OBAN Classifier: A Skorch-based flexible neural network for binary and multiclass classification

Oban Classifier is a flexible neural-network classifier built on top of PyTorch and Skorch. It supports both binary and multiclass classification and lets users configure the number of hidden units, the activation function, the dropout rate, and more.

## Features

  • Supports binary and multiclass classification.
  • Allows user-defined parameters for hidden units, activation functions, dropout, and more.
  • Built using Skorch and PyTorch for easy integration with scikit-learn pipelines.
  • Provides detailed performance metrics including accuracy, precision, recall, F1-score, and confusion matrix.
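
The reported metrics all derive from the confusion-matrix counts. As a reference, here is a plain-Python sketch of how accuracy, precision, recall, and F1 relate for the binary case (this is illustrative only, not library code; the library computes these for you):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 from hard binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# tp=2, tn=1, fp=1, fn=1 for this toy example
m = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(m)  # accuracy 0.6, precision/recall/f1 all 2/3
```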

## Installation

Install the library from PyPI with pip:

```bash
pip install oban_classifier
```


### Usage Example

```python

from oban_classifier import oban_classifier, post_classification_analysis, plot_lime_importance
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Load the Breast Cancer dataset
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

# Train and evaluate the model (num_classes is automatically inferred)
netv, X_test, y_test = oban_classifier(X, y, num_units=128, max_epochs=80, lr=0.001)

# Convert X_test to DataFrame with feature names
X_test_df = pd.DataFrame(X_test, columns=X.columns)

# Predict probabilities
y_proba = netv.predict_proba(X_test_df.to_numpy())

# Perform post-classification analysis
post_classification_analysis(X_test_df, y_test, y_proba, threshold=0.5)

# Explain predictions using LIME with correct feature names
plot_lime_importance(netv, X_test_df, y_test, feature_names=X.columns)

# Predict on new data
new_data = pd.DataFrame([[15.0, 20.0, 85.0, 60.0, 0.5, 1.5, 3.0, 0.02, 0.2, 0.3,
                          0.1, 25.0, 50.0, 150.0, 100.0, 0.1, 0.5, 2.5, 0.01, 0.1,
                          15.0, 20.0, 85.0, 60.0, 0.5, 1.5, 3.0, 0.02, 0.2, 0.3]], 
                        columns=X.columns)

# Scale the new data. Ideally, reuse the exact scaler fitted on the training
# data; here a StandardScaler is refit on the full feature matrix as an
# approximation.
scaler = StandardScaler()
scaler.fit(X)
new_data_scaled = scaler.transform(new_data)

# Predict the class for the new data
predicted_class = netv.predict(new_data_scaled)
print(f"Predicted class: {predicted_class}")

# Predict probabilities for the new data
predicted_probabilities = netv.predict_proba(new_data_scaled)
print(f"Predicted probabilities: {predicted_probabilities}")
```

#### oban_classifier Parameters

  • X (pd.DataFrame): The feature matrix; each row is an instance and each column a feature.
  • y (pd.Series): The target variable; each value is the target class of the corresponding row in X.
  • num_units (int, optional, default=128): The number of hidden units in the dense layers of the neural network.
  • nonlin (torch.nn.Module, optional, default=nn.ReLU()): The non-linear activation function applied after each dense layer; alternatives include nn.Sigmoid() and nn.Tanh().
  • dropout_rate (float, optional, default=0.5): The dropout rate applied to the layers to prevent overfitting; must be between 0 and 1.
  • max_epochs (int, optional, default=10): The maximum number of epochs to train the model.
  • lr (float, optional, default=0.01): The learning rate for the optimizer.
  • test_size (float, optional, default=0.2): The proportion of the dataset held out for testing; must be between 0 and 1.
  • random_state (int, optional, default=42): The seed for the random number generator, ensuring a reproducible train/test split.
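
The defaults and valid ranges above can be captured in a small validation sketch. Note that `validate_params` below is a hypothetical helper written for illustration, not part of the library's API; only the default values and ranges come from the parameter list:

```python
# Documented defaults for oban_classifier's optional keyword arguments.
DEFAULTS = {
    "num_units": 128,
    "dropout_rate": 0.5,
    "max_epochs": 10,
    "lr": 0.01,
    "test_size": 0.2,
    "random_state": 42,
}

def validate_params(**overrides):
    """Hypothetical helper: merge overrides into the documented defaults,
    checking the documented ranges before training."""
    params = {**DEFAULTS, **overrides}
    if not 0 <= params["dropout_rate"] <= 1:
        raise ValueError("dropout_rate must be between 0 and 1")
    if not 0 < params["test_size"] < 1:
        raise ValueError("test_size must be between 0 and 1")
    if params["max_epochs"] < 1 or params["lr"] <= 0:
        raise ValueError("max_epochs must be >= 1 and lr must be positive")
    return params

# Mirrors the settings used in the usage example above.
params = validate_params(max_epochs=80, lr=0.001)
print(params["max_epochs"], params["lr"])  # 80 0.001
```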


#### post_classification_analysis Parameters

  • X (pd.DataFrame): The feature matrix used during testing.
  • y_true (pd.Series): The true class labels for the test set.
  • y_proba (np.ndarray): The predicted probabilities for each class, one column per class.
  • threshold (float, optional, default=0.5): The decision threshold for binary classification: an instance is labeled 1 when its positive-class probability is greater than or equal to the threshold, otherwise 0. Ignored in multiclass classification.
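
The binary thresholding rule can be illustrated without the library itself. A minimal plain-Python sketch, assuming y_proba is an (n, 2) array-like whose second column holds the positive-class probability (as returned by predict_proba for a binary classifier):

```python
def apply_threshold(y_proba, threshold=0.5):
    """Map binary class probabilities to hard 0/1 labels.

    y_proba: sequence of (p_class0, p_class1) rows.
    An instance is labeled 1 when p_class1 >= threshold, else 0.
    """
    return [1 if p1 >= threshold else 0 for _, p1 in y_proba]

probs = [(0.9, 0.1), (0.4, 0.6), (0.5, 0.5)]
print(apply_threshold(probs))                 # [0, 1, 1]
print(apply_threshold(probs, threshold=0.7))  # [0, 0, 0]
```

Raising the threshold trades recall for precision: fewer instances clear the bar for class 1.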
