
A PyTorch-based deep learning classifier training framework.

Project description

Classifier-trains

For project details, please see README.md

Install as a package

pip3 install classifier-trains

Makefile Commands (available only when the repository is cloned from GitHub)

# Check all available commands
make help

After installation

For examples of pipeline configurations, please see pipeline_config_only_train.yml, pipeline_config_only_eval.yml, and full_pipeline_config.yml. An illustrative configuration sketch is also included after the parameter tables below.

# Run training or evaluation
python3 -m classifier_trains run --config <path to yml config> --output_dir <output_dir>

# Run training or evaluation with profiling, which will generate a profile report
python3 -m classifier_trains profile --config <path to yml config> --output_dir <output_dir>

# Compute mean and std of dataset
python3 -m classifier_trains compute-mean-and-std --dir-path <path to dataset>

# Get output mapping of dataset
python3 -m classifier_trains get-output-mapping --dir-path <path to dataset>
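
A typical end-to-end workflow chains these commands. The dataset and config paths below are placeholders chosen for illustration, not files shipped with the package:

# 1. Compute normalization statistics for the preprocessing mean/std fields
python3 -m classifier_trains compute-mean-and-std --dir-path ./my_dataset

# 2. Inspect the output mapping of the dataset classes
python3 -m classifier_trains get-output-mapping --dir-path ./my_dataset

# 3. Run the configured pipeline and collect results in ./outputs
python3 -m classifier_trains run --config full_pipeline_config.yml --output_dir ./outputs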

Expected Folder Structure for Dataset

Dataset Directory
├── train
│   ├── <class1>
│   │   ├── <image1>
│   │   └── ...
│   └── <class2>
│       ├── <image1>
│       └── ...
├── val
│   ├── <class1>
│   │   ├── <image1>
│   │   └── ...
│   └── <class2>
│       ├── <image1>
│       └── ...
└── eval
    ├── <class1>
    │   ├── <image1>
    │   └── ...
    └── <class2>
        ├── <image1>
        └── ...

Pipeline Parameter Tables

A "/" in the Default column indicates that the parameter has no default value.

Pipeline Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| enable_training | Enable training | bool | False | True, False |
| enable_evaluation | Enable evaluation | bool | False | True, False |

Model Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| model | Model architecture to use | str | / | resnet18, resnet34, resnet50, resnet152, vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn, squeezenet1_0, squeezenet1_1, densenet121, densenet161, densenet169, densenet201, inception_v3 |
| num_classes | Number of classes | int | / | Any positive integer |
| weights | Pretrained weights | str | DEFAULT | See the PyTorch documentation |
| checkpoint_path | Path to checkpoint | Optional[str] | None | Self-defined path |

Dataloader Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| batch_size | Batch size | int | / | Any positive integer |
| num_workers | Number of workers | int | 0 | Any non-negative integer |

Resize Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| width | Width of resized image | int | / | Any positive integer |
| height | Height of resized image | int | / | Any positive integer |
| interpolation | Interpolation method | str | bicubic | nearest, nearest_exact, bilinear, bicubic |
| padding | Padding | Optional[str] | None | top_left, top_right, bottom_left, bottom_right, center |
| maintain_aspect_ratio | Maintain aspect ratio | bool | False | True, False |

Spatial Transform Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| hflip_prob | Horizontal flip probability | float | / | Any float between 0 and 1 |
| vflip_prob | Vertical flip probability | float | / | Any float between 0 and 1 |
| max_rotate_in_degree | Maximum rotation in degrees | float | 0.0 | Any non-negative float |
| allow_center_crop | Allow center crop | bool | False | True, False |
| allow_random_crop | Allow random crop | bool | False | True, False |

Color Transform Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| allow_gray_scale | Allow grayscale | bool | False | True, False |
| allow_random_color | Allow random color | bool | False | True, False |

Preprocessing Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| mean | Mean of dataset | List[float] | / | Any list of floats |
| std | Standard deviation of dataset | List[float] | / | Any list of floats |

Optimizer Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| name | Optimizer to use | str | / | sgd, adam, rmsprop, adamw |
| lr | Learning rate | float | / | Any positive float |
| weight_decay | Weight decay | float | 0.0 | Any non-negative float |
| momentum | Momentum | Optional[float] | None | Any non-negative float |
| alpha | Alpha | Optional[float] | None | Any non-negative float |
| betas | Betas | Optional[Tuple[float, float]] | None | Any tuple of non-negative floats |

Scheduler Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| name | Scheduler to use | str | / | step, cosine |
| lr_min | Minimum learning rate | Optional[float] | None | Any non-negative float |
| step_size | Step size | Optional[int] | None | Any positive integer |
| gamma | Gamma | Optional[float] | None | Any non-negative float |

Training Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| name | Name of the experiment | str | / | Any string |
| num_epochs | Number of epochs | int | / | Any positive integer |
| trainset_dir | Path to training dataset | str | / | Any string |
| valset_dir | Path to validation dataset | str | / | Any string |
| testset_dir | Path to testing dataset | Optional[str] | None | Any string |
| device | Device to use | str | cuda | cpu, cuda, etc. |
| max_num_hrs | Maximum number of hours to run | Optional[float] | None | Any non-negative float |
| criterion | Criterion used to select the best model | str | loss | loss, accuracy |
| validate_every | Validate every n epochs | int | 1 | Any positive integer |
| save_every | Save model every n epochs | int | 3 | Any positive integer |
| patience | Patience for early stopping | int | 5 | Any positive integer |
| random_seed | Random seed | int | 42 | Any positive integer |
| precision | Precision | int | 64 | 16, 32, 64 |
| export_last_as_onnx | Export last model as ONNX | bool | False | True, False |
| export_best_as_onnx | Export best model as ONNX | bool | False | True, False |

Evaluation Parameters

| Parameter | Description | Type | Default | Choices |
| --- | --- | --- | --- | --- |
| name | Name of the experiment | str | / | Any string |
| evalset_dir | Path to evaluation dataset | str | / | Any string |
| device | Device to use | str | cuda | cpu, cuda, etc. |
| precision | Precision | int | 64 | 16, 32, 64 |
| random_seed | Random seed | int | 42 | Any positive integer |
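
To show how these parameters fit together, the sketch below lays out one possible pipeline configuration. The top-level section names and their nesting are assumptions made for illustration (roughly one section per table above); the parameter names and allowed values come from the tables, while the concrete values are arbitrary examples. The shipped pipeline_config_only_train.yml, pipeline_config_only_eval.yml, and full_pipeline_config.yml remain the authoritative references for the actual layout.

# Illustrative sketch only: section names and nesting are assumed,
# not taken from the shipped example configs.
enable_training: true
enable_evaluation: true

model:
  model: resnet18              # any architecture from the supported list
  num_classes: 2
  weights: DEFAULT
  checkpoint_path: null

dataloader:
  batch_size: 32
  num_workers: 4

resize:
  width: 224
  height: 224
  interpolation: bicubic
  maintain_aspect_ratio: false

spatial_transform:
  hflip_prob: 0.5
  vflip_prob: 0.0
  max_rotate_in_degree: 10.0
  allow_center_crop: false
  allow_random_crop: true

color_transform:
  allow_gray_scale: false
  allow_random_color: false

preprocessing:
  mean: [0.485, 0.456, 0.406]  # e.g. values reported by compute-mean-and-std
  std: [0.229, 0.224, 0.225]

optimizer:
  name: adam
  lr: 0.001
  weight_decay: 0.0
  betas: [0.9, 0.999]

scheduler:
  name: cosine
  lr_min: 0.00001

training:
  name: example-experiment
  num_epochs: 20
  trainset_dir: ./my_dataset/train
  valset_dir: ./my_dataset/val
  device: cuda
  criterion: loss
  validate_every: 1
  save_every: 3
  patience: 5
  random_seed: 42
  precision: 32
  export_best_as_onnx: true

evaluation:
  name: example-experiment
  evalset_dir: ./my_dataset/eval
  device: cuda
  precision: 32
  random_seed: 42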

Download files

Download the file for your platform.

Source Distribution

classifier_trains-1.1.1.tar.gz (19.4 kB)


Built Distribution

classifier_trains-1.1.1-py3-none-any.whl (24.8 kB)


File details

Details for the file classifier_trains-1.1.1.tar.gz.

File metadata

  • Download URL: classifier_trains-1.1.1.tar.gz
  • Size: 19.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.10 Linux/6.5.0-1025-azure

File hashes

Hashes for classifier_trains-1.1.1.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 1a3cbac6b871b126995966c101025fb2963506c6ce51aea1fb1831db03520433 |
| MD5 | 300c912c7f4e479d2a95e533bdaae02b |
| BLAKE2b-256 | f93ec6e77b60ff08849179a2f5b02c432811bf5ae2ac7bfc5c793d42e3662941 |


File details

Details for the file classifier_trains-1.1.1-py3-none-any.whl.

File metadata

  • Download URL: classifier_trains-1.1.1-py3-none-any.whl
  • Size: 24.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.10 Linux/6.5.0-1025-azure

File hashes

Hashes for classifier_trains-1.1.1-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 3567013a5f976e3a3026a90f4e80cd2e73a452d067ec9de85feba224d5ccb2e1 |
| MD5 | 1158a147154dda811af93b766cec05f2 |
| BLAKE2b-256 | c6cf7820b271348be4e25fff06936a29fe84d42ef22073c3f2885f5e0476260c |

