A supervised learning framework for tabular data, based on the perceptron.
Project description
perming
perming: Perceptron Models Are Training on Windows Platform with Default GPU Acceleration.
- p: use polars or pandas to read the dataset.
- per: the perceptron algorithm is used as the base model.
- m: models including a regressor and classifiers (binary & multiple).
- ing: training on the Windows platform with strong GPU acceleration.
init backend

Refer to https://pytorch.org/get-started/locally/ and choose a PyTorch build with CUDA support that is compatible with your Windows machine. The current software version only supports Windows.

Tested with: PyTorch 1.7.1+cu101
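A quick way to confirm the backend is set up correctly is to check that the installed PyTorch build can see the GPU. A minimal check using PyTorch's standard API:

```python
import torch

print(torch.__version__)           # e.g. 1.7.1+cu101 (the tested build)
print(torch.cuda.is_available())   # should print True on a CUDA-capable Windows machine
```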
general model
GENERAL_BOX(Box) | Parameters | Meaning |
---|---|---|
`__init__` | `input_: int`, `num_classes: int`, `hidden_layer_sizes: Tuple[int]=(100,)`, `device: str="cuda"`, `*`, `activation: str="relu"`, `inplace_on: bool=True`, `criterion: str="CrossEntropyLoss"`, `solver: str="adam"`, `batch_size: int=32`, `learning_rate_init: float=1e-2`, `lr_scheduler: Optional[str]=None` | Initialize a classifier or regressor from basic information about the dataset obtained through data preprocessing and feature engineering. |
`print_config` | / | Return the initialized parameters of the multi-layer perceptron and its graph. |
`data_loader` | `features: TabularData`, `labels: TabularData`, `ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}`, `worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}`, `random_seed: Optional[int]=None` | Use `ratio_set` and `worker_set` to load the NumPy dataset into `torch.utils.data.DataLoader`. |
`train_val` | `num_epochs: int=2`, `tolerance: float=1e-3`, `patience: int=10`, `interval: int=100`, `backend: str="threading"`, `n_jobs: int=-1`, `early_stop: bool=False` | Use `num_epochs`, `tolerance`, and `patience` to control the training process and `interval` to adjust the print interval; validation is accelerated with `backend` and `n_jobs`. |
`test` | `sort_by: str="accuracy"`, `sort_state: bool=True` | Sort the returned test results on correct classes with `sort_by` and `sort_state`; only available for classification. |
`save` | `show: bool=True`, `dir: str='./model'` | Save the trained model parameters (model `state_dict`); printing is controlled by `show`. |
`load` | `show: bool=True`, `dir: str='./model'` | Load trained model parameters (model `state_dict`); printing is controlled by `show`. |
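A minimal end-to-end sketch of the workflow the table describes, assuming `Box` is importable from the `perming` package top level; the dataset here is random and its shapes (8 features, 4 classes) are illustrative assumptions:

```python
import numpy as np
import perming

# Illustrative tabular dataset: 1000 samples, 8 features, 4 classes (assumed shapes).
features = np.random.rand(1000, 8).astype(np.float32)
labels = np.random.randint(0, 4, (1000,)).astype(np.int64)

# Positional order follows the table: input_, num_classes, hidden_layer_sizes, device.
main = perming.Box(8, 4, (60,), "cuda", activation="relu", criterion="CrossEntropyLoss",
                   solver="adam", batch_size=32, learning_rate_init=1e-2)
main.print_config()                                    # show layers and hyperparameters
main.data_loader(features, labels, random_seed=0)      # 8:1:1 train/test/val split by default
main.train_val(num_epochs=2, interval=100, early_stop=True)
main.test(sort_by="accuracy")
main.save(show=True, dir='./model')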
common models (cuda first)
- Regression
Regressier | Parameters | Meaning |
---|---|---|
`__init__` | `input_: int`, `hidden_layer_sizes: Tuple[int]=(100,)`, `*`, `activation: str="relu"`, `criterion: str="MSELoss"`, `solver: str="adam"`, `batch_size: int=32`, `learning_rate_init: float=1e-2`, `lr_scheduler: Optional[str]=None` | Initialize a regressor from basic information about the regression dataset obtained through data preprocessing and feature engineering, with `num_classes=1`. |
`print_config` | / | Return the initialized parameters of the multi-layer perceptron and its graph. |
`data_loader` | `features: TabularData`, `labels: TabularData`, `ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}`, `worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}`, `random_seed: Optional[int]=None` | Use `ratio_set` and `worker_set` to load the regression dataset in NumPy format into `torch.utils.data.DataLoader`. |
`train_val` | `num_epochs: int=2`, `interval: int=100`, `tolerance: float=1e-3`, `patience: int=10`, `backend: str="threading"`, `n_jobs: int=-1`, `early_stop: bool=False` | Use `num_epochs`, `tolerance`, and `patience` to control the training process and `interval` to adjust the print interval; validation is accelerated with `backend` and `n_jobs`. |
`test` | / | The test module only reports the loss at the 3 stages: train, test, and val. |
`save` | `show: bool=True`, `dir: str='./model'` | Save the trained model parameters (model `state_dict`); printing is controlled by `show`. |
`load` | `show: bool=True`, `dir: str='./model'` | Load trained model parameters (model `state_dict`); printing is controlled by `show`. |
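A hedged sketch of the regression workflow under the same assumptions as above (synthetic data, illustrative shapes):

```python
import numpy as np
import perming

features = np.random.rand(1000, 8).astype(np.float32)
target = np.random.rand(1000).astype(np.float32)   # shape (n,), not (n, 1); see the squeeze note below

reg = perming.Regressier(8, (100,), activation="relu", criterion="MSELoss", solver="adam")
reg.data_loader(features, target, random_seed=0)
reg.train_val(num_epochs=2, interval=100)
reg.test()   # reports loss on the train, test, and val splits
```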
- Binary-classification
Binarier | Parameters | Meaning |
---|---|---|
`__init__` | `input_: int`, `hidden_layer_sizes: Tuple[int]=(100,)`, `*`, `activation: str="relu"`, `criterion: str="BCELoss"`, `solver: str="adam"`, `batch_size: int=32`, `learning_rate_init: float=1e-2`, `lr_scheduler: Optional[str]=None` | Initialize a classifier from basic information about the classification dataset obtained through data preprocessing and feature engineering, with `num_classes=2`. |
`print_config` | / | Return the initialized parameters of the multi-layer perceptron and its graph. |
`data_loader` | `features: TabularData`, `labels: TabularData`, `ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}`, `worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}`, `random_seed: Optional[int]=None` | Use `ratio_set` and `worker_set` to load the binary-classification dataset in NumPy format into `torch.utils.data.DataLoader`. |
`train_val` | `num_epochs: int=2`, `interval: int=100`, `tolerance: float=1e-3`, `patience: int=10`, `backend: str="threading"`, `n_jobs: int=-1`, `early_stop: bool=False` | Use `num_epochs`, `tolerance`, and `patience` to control the training process and `interval` to adjust the print interval; validation is accelerated with `backend` and `n_jobs`. |
`test` | `sort_by: str="accuracy"`, `sort_state: bool=True` | The test module reports the correct classes and the loss at the 3 stages: train, test, and val. |
`save` | `show: bool=True`, `dir: str='./model'` | Save the trained model parameters (model `state_dict`); printing is controlled by `show`. |
`load` | `show: bool=True`, `dir: str='./model'` | Load trained model parameters (model `state_dict`); printing is controlled by `show`. |
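A corresponding sketch for binary classification; the label encoding and dtype here are assumptions consistent with the `BCELoss` default:

```python
import numpy as np
import perming

features = np.random.rand(1000, 8).astype(np.float32)
labels = np.random.randint(0, 2, (1000,)).astype(np.int64)   # binary targets in {0, 1} (dtype assumed)

clf = perming.Binarier(8, (100,), activation="relu", criterion="BCELoss")
clf.data_loader(features, labels, random_seed=0)
clf.train_val(num_epochs=2, early_stop=True)
clf.test(sort_by="accuracy", sort_state=True)
```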
- Multi-classification
Multipler | Parameters | Meaning |
---|---|---|
`__init__` | `input_: int`, `num_classes: int`, `hidden_layer_sizes: Tuple[int]=(100,)`, `*`, `activation: str="relu"`, `criterion: str="CrossEntropyLoss"`, `solver: str="adam"`, `batch_size: int=32`, `learning_rate_init: float=1e-2`, `lr_scheduler: Optional[str]=None` | Initialize a classifier from basic information about the classification dataset obtained through data preprocessing and feature engineering, with `num_classes>2`. |
`print_config` | / | Return the initialized parameters of the multi-layer perceptron and its graph. |
`data_loader` | `features: TabularData`, `labels: TabularData`, `ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}`, `worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}`, `random_seed: Optional[int]=None` | Use `ratio_set` and `worker_set` to load the multi-classification dataset in NumPy format into `torch.utils.data.DataLoader`. |
`train_val` | `num_epochs: int=2`, `interval: int=100`, `tolerance: float=1e-3`, `patience: int=10`, `backend: str="threading"`, `n_jobs: int=-1`, `early_stop: bool=False` | Use `num_epochs`, `tolerance`, and `patience` to control the training process and `interval` to adjust the print interval; validation is accelerated with `backend` and `n_jobs`. |
`test` | `sort_by: str="accuracy"`, `sort_state: bool=True` | Sort the returned test results on correct classes with `sort_by` and `sort_state`; only available for classification. |
`save` | `show: bool=True`, `dir: str='./model'` | Save the trained model parameters (model `state_dict`); printing is controlled by `show`. |
`load` | `show: bool=True`, `dir: str='./model'` | Load trained model parameters (model `state_dict`); printing is controlled by `show`. |
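A sketch for multi-classification; note that `num_classes` is the second positional argument here, and the class count (5) is an illustrative assumption:

```python
import numpy as np
import perming

features = np.random.rand(1000, 8).astype(np.float32)
labels = np.random.randint(0, 5, (1000,)).astype(np.int64)   # 5 classes (assumed)

clf = perming.Multipler(8, 5, (100,), criterion="CrossEntropyLoss")
clf.data_loader(features, labels, random_seed=0)
clf.train_val(num_epochs=2)
clf.test(sort_by="accuracy")
```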
- Multi-outputs
Ranker | Parameters | Meaning |
---|---|---|
`__init__` | `input_: int`, `num_outputs: int`, `hidden_layer_sizes: Tuple[int]=(100,)`, `*`, `activation: str="relu"`, `criterion: str="MultiLabelSoftMarginLoss"`, `solver: str="adam"`, `batch_size: int=32`, `learning_rate_init: float=1e-2`, `lr_scheduler: Optional[str]=None` | Initialize a ranker from basic information about the multi-output dataset obtained through data preprocessing and feature engineering, with labels of shape (n_samples, n_outputs). |
`print_config` | / | Return the initialized parameters of the multi-layer perceptron and its graph. |
`data_loader` | `features: TabularData`, `labels: TabularData`, `ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}`, `worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}`, `random_seed: Optional[int]=None` | Use `ratio_set` and `worker_set` to load the multi-outputs dataset in NumPy format into `torch.utils.data.DataLoader`. |
`train_val` | `num_epochs: int=2`, `interval: int=100`, `tolerance: float=1e-3`, `patience: int=10`, `backend: str="threading"`, `n_jobs: int=-1`, `early_stop: bool=False` | Use `num_epochs`, `tolerance`, and `patience` to control the training process and `interval` to adjust the print interval; validation is accelerated with `backend` and `n_jobs`. |
`test` | `sort_by: str="accuracy"`, `sort_state: bool=True` | Sort the returned test results on correct classes with `sort_by` and `sort_state`; only available for classification. |
`save` | `show: bool=True`, `dir: str='./model'` | Save the trained model parameters (model `state_dict`); printing is controlled by `show`. |
`load` | `show: bool=True`, `dir: str='./model'` | Load trained model parameters (model `state_dict`); printing is controlled by `show`. |
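A sketch for the multi-output case; the key difference from the classifiers above is that labels are a 2-D matrix of shape (n_samples, n_outputs). The output count (3) and label dtype are illustrative assumptions:

```python
import numpy as np
import perming

features = np.random.rand(1000, 8).astype(np.float32)
labels = np.random.randint(0, 2, (1000, 3)).astype(np.float32)   # shape (n_samples, n_outputs)

ranker = perming.Ranker(8, 3, (100,), criterion="MultiLabelSoftMarginLoss")
ranker.data_loader(features, labels, random_seed=0)
ranker.train_val(num_epochs=2)
ranker.test()
```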
Prefer replacing a label array of shape (n, 1) with shape (n,) using
numpy.squeeze(input_matrix)
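For example (`labels` is an illustrative name):

```python
import numpy as np

labels = np.random.rand(1000, 1)   # shape (n, 1)
labels = np.squeeze(labels)        # shape (n,)
```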
pip install
Download the latest version:
git clone https://github.com/linjing-lab/easy-pytorch.git
cd easy-pytorch/released_box
pip install -e . --verbose
Download the stable version:
pip install perming --upgrade
Download files
Download the file for your platform.
Source Distributions
No source distribution files are available for this release.
Built Distribution

perming-1.4.1-py3-none-any.whl (13.8 kB)
File details

Details for the file perming-1.4.1-py3-none-any.whl.
File metadata
- Download URL: perming-1.4.1-py3-none-any.whl
- Upload date:
- Size: 13.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/6.0.0 pkginfo/1.8.2 requests/2.28.2 requests-toolbelt/0.9.1 tqdm/4.64.1 CPython/3.8.12
File hashes

Algorithm | Hash digest |
---|---|
SHA256 | `238209a67fb33ecb194f09af56b48f740f6e8c6899b9e91e1d9814a88da5ed0a` |
MD5 | `c56c5602381f0cb732afd60151b6c26a` |
BLAKE2b-256 | `f1104eca7ff0fdc68472ec9b84f1b9c49f662d2f35d97c9379c8c561abf87834` |