
Rock classifier deployed on Railway and monitored with Weights & Biases!

Project description

Whats-this-rock

This project deploys a Telegram bot that classifies rock images into one of seven types.


This package uses TensorFlow to accelerate deep learning experimentation.

The MLOps workflow was done using Weights & Biases:

  • Experiment Tracking
  • Model Management
  • Hyperparameter Tuning

Additionally, nbdev was used to:

  • develop the package
  • produce documentation from a series of notebooks
  • run CI
  • publish to PyPI

Inspiration

The common complaint that you need massive amounts of data to do deep learning can be a very long way from the truth!

You very often don't need much data at all. A lot of people are looking for ways to share and aggregate data, but that's unnecessary. They assume they need more data than they do because they're not familiar with the basics of transfer learning, which is a critical technique for needing orders of magnitude less data.

Jeremy Howard

Installation & Training Steps

Install

To install, use pip:

pip install git+https://github.com/udaylunawat/Whats-this-rock.git
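As a quick sanity check, the installed package can be imported from Python. The __version__ attribute below is an assumption based on the usual nbdev package layout (0.0.7 at the time of writing), not something this README guarantees:

# Hedged sketch: confirm the install by importing the package.
# Assumption: rocks_classifier exposes __version__ (typical for nbdev-generated packages).
import rocks_classifier

print(rocks_classifier.__version__)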

Use the Telegram Bot

You can try the bot here on Telegram.

Type /help to get instructions in chat.

Deploy Telegram Bot

rocks_deploy_bot

Train Model

Run this command:

rocks_train_model epochs=3

You can try different models and parameters by editing configs/configs.yaml.

By using Hydra, it's now much easier to override parameters like this:

rocks_train_model wandb.project=Whats-this-rock \
                  dataset_id=[1,2] \
                  epochs=50 \
                  backbone=resnet
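Hydra merges dotlist overrides like these into the base config before training starts. Below is a minimal sketch of that merge using OmegaConf, the library Hydra builds on; the config keys shown are illustrative placeholders, not the project's actual schema:

# Hedged sketch of Hydra-style dotlist overrides with OmegaConf.
# The keys below (epochs, backbone, wandb.project) are illustrative, not the real configs/configs.yaml.
from omegaconf import OmegaConf

base = OmegaConf.create({"epochs": 10, "backbone": "efficientnet", "wandb": {"project": "Whats-this-rock"}})
overrides = OmegaConf.from_dotlist(["epochs=50", "backbone=resnet"])
cfg = OmegaConf.merge(base, overrides)  # the merged config is what the training entry point sees
print(OmegaConf.to_yaml(cfg))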


Wandb Sweeps (Hyperparameter Tuning)

Edit configs/sweeps.yaml, then run:

wandb sweep \
--project Whats-this-rock \
--entity udaylunawat \
configs/sweeps.yaml

This will return a command containing the $sweepid:

wandb agent udaylunawat/Whats-this-rock/$sweepid
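Equivalently, a sweep can be created from Python with the wandb API. The sweep configuration below is only an illustrative sketch, not the contents of configs/sweeps.yaml:

# Hedged sketch: create the sweep programmatically instead of via `wandb sweep`.
# The method, metric, and parameter values are placeholders, not the project's real sweep config.
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "epochs": {"values": [10, 30, 50]},
        "backbone": {"values": ["resnet", "efficientnet"]},
    },
}
sweep_id = wandb.sweep(sweep_config, project="Whats-this-rock", entity="udaylunawat")
print(sweep_id)  # use it as: wandb agent udaylunawat/Whats-this-rock/<sweep_id>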

Demo

Run in Colab | View Source on GitHub | Download Notebook

Features

Features added and planned:
  • Wandb

  • Datasets

    • 4 Datasets
  • Augmentation

    • keras-cv
    • Regular Augmentation
  • Sampling

    • Oversampling
    • Undersampling
    • Class weights
  • Remove Corrupted Images

  • Try Multiple Optimizers (Adam, RMSProp, AdamW, SGD)

  • Generators

    • TFDS datasets
    • ImageDataGenerator
  • Models

    • ConvNextTiny
    • BaselineCNN
    • Efficientnet
    • Resnet101
    • MobileNetv1
    • MobileNetv2
    • Xception
  • LRScheduler, LRDecay

    • Baseline without scheduler
    • Step decay
    • Cosine annealing
    • Classic cosine annealing with batch steps w/o restart
  • Model Checkpoint, Resume Training

  • Evaluation

    • Confusion Matrix
    • Classification Report
  • Deploy Telegram Bot

    • Heroku - Deprecated
    • Railway
    • Show confusion matrix and classification report in bot
  • Docker

  • GitHub Actions

    • Deploy Bot when bot.py is updated.
    • Lint code using GitHub super-linter
  • Configuration Management

    • ml-collections
    • Hydra
  • Performance improvement

    • Convert to tf.data.Dataset
  • Linting & Formatting

    • Black
    • Flake8
    • isort
    • pydocstyle
  • Add Badges

    • Linting
  • Found the classes that the model performs terribly on

  • nbdev

  • CI

  • documentation

  • Deploy to Huggingface spaces

  • Accessing the model through FastAPI (Backend)

  • Streamlit (Frontend)

  • Convert models.py to classes and a more OOP style

  • Group Runs

    • k-fold cross validation
  • WandB Tables

  • Find the long-tail or hard examples

  • Add Badges

    • Railway
Technologies Used

  • Google Colab
  • python-telegram-bot
  • Railway
  • Jupyter Notebook
  • Python
  • GitHub Actions
  • Weights & Biases
  • TensorFlow
  • macOS
  • Docker
  • Git
  • Hydra
  • Black

Directory Tree

    ├── imgs                              <- Images for skill banner, project banner and other images
    │
    ├── configs                           <- Configuration files
    │   ├── configs.yaml                  <- config for single run
    │   └── sweeps.yaml                   <- configuration file for sweeps hyperparameter tuning
    │
    ├── data
    │   ├── corrupted_images              <- corrupted images will be moved to this directory
    │   ├── misclassified_images          <- misclassified images will be moved to this directory
    │   ├── bad_images                    <- Bad images will be moved to this directory
    │   ├── duplicate_images              <- Duplicate images will be moved to this directory
    │   ├── sample_images                 <- Sample images for inference
    │   ├── 0_raw                         <- The original, immutable data dump.
    │   ├── 1_external                    <- Data from third party sources.
    │   ├── 2_interim                     <- Intermediate data that has been transformed.
    │   └── 3_processed                   <- The final, canonical data sets for modeling.
    │
    ├── notebooks                         <- Jupyter notebooks. Naming convention is a number (for ordering),
    │                                        the creator's initials, and a short `-` delimited description, e.g.
    │                                        `1.0-jqp-initial-data-exploration`.
    │
    │
    ├── rocks_classifier                  <- Source code for use in this project.
    │   │
    │   ├── data                          <- Scripts to download or generate data
    │   │   ├── download.py
    │   │   ├── preprocess.py
    │   │   └── utils.py
    │   │
    │   ├── callbacks                     <- functions that are executed during training at given stages of the training procedure
    │   │   └── callbacks.py
    │   │
    │   ├── models                        <- Scripts to train models and then use trained models to make
    │   │   │                                predictions
    │   │   ├── evaluate.py
    │   │   ├── models.py
    │   │   ├── predict.py
    │   │   ├── train.py
    │   │   └── utils.py
    │   │
    │   │
    │   └── visualization                 <- Scripts for visualizations
    │
    ├── .dockerignore                     <- Docker ignore
    ├── .gitignore                        <- GitHub's excellent Python .gitignore customized for this project
    ├── LICENSE                           <- Your project's license.
    ├── README.md                         <- The top-level README for developers using this project.
    ├── CHANGELOG.md                      <- Release changes.
    ├── CODE_OF_CONDUCT.md                <- Code of conduct.
    ├── CONTRIBUTING.md                   <- Contributing Guidelines.
    ├── settings.ini                      <- nbdev project settings and package metadata.
    ├── requirements.txt                  <- The requirements file for reproducing the analysis environment, e.g.
    │                                        generated with `pip freeze > requirements.txt`
    └── setup.py                          <- makes project pip installable (pip install -e .) so rocks_classifier can be imported
    

Bug / Feature Request

If you find a bug (the bot couldn't handle an input image and/or gave an undesired result), kindly open an issue here and include the input and the expected result.

If you'd like to request a new feature, feel free to do so by opening an issue here. Please include sample inputs and their expected results.

Contributing

  • Contributions make the open source community such an amazing place to learn, inspire, and create.
  • Any contributions you make are greatly appreciated.
  • Check out our contribution guidelines for more information.

License

Whats-this-rock is licensed under the MIT License - see the LICENSE file for details.

Credits

Support

This project needs a ⭐️ from you. Don't forget to leave a star ⭐️


Walt might be the one who knocks,
but Hank is the one who rocks.

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • rocks_classifier-0.0.7.tar.gz (32.4 kB), uploaded as Source

Built Distribution

  • rocks_classifier-0.0.7-py3-none-any.whl (32.6 kB), uploaded as Python 3

File details

Details for the file rocks_classifier-0.0.7.tar.gz.

File metadata

  • Download URL: rocks_classifier-0.0.7.tar.gz
  • Upload date:
  • Size: 32.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for rocks_classifier-0.0.7.tar.gz:

  • SHA256: e377a458215608992ff7a1fc2b3ffb05ffd18624b6a4ce2b37969fc33e1eebd7
  • MD5: 581205c8372e6498e696304678fadbe9
  • BLAKE2b-256: acc5d8726b722fa5ee3f1c86d3ca2801bf6a19c270c9e265971b5397ad311f67

See more details on using hashes here.
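To check a downloaded file against the hashes above, a small Python snippet is enough; the filename assumes the sdist sits in the current directory:

# Hedged sketch: verify the sdist against the SHA256 digest listed above.
import hashlib

with open("rocks_classifier-0.0.7.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "e377a458215608992ff7a1fc2b3ffb05ffd18624b6a4ce2b37969fc33e1eebd7"
print("OK" if digest == expected else "hash mismatch")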

File details

Details for the file rocks_classifier-0.0.7-py3-none-any.whl.

File metadata

File hashes

Hashes for rocks_classifier-0.0.7-py3-none-any.whl:

  • SHA256: 3fb7def23a0a42f327f9bd3347b0cc0860b10d2e71e88d257e511facab901eb8
  • MD5: 7a87ac393ccf440dbbe7abb608f126c8
  • BLAKE2b-256: 0e1f3be7450e2431c0f97ff040eafa8590423702b171a8156fc50455389b222a

See more details on using hashes here.
