
ATOM

Automated Tool for Optimized Modelling

A Python package for fast exploration of machine learning pipelines



📜 Overview

Author: Mavs · Email: m.524687@gmail.com · Documentation · Slack


General Information
Repository: Project Status: Active · Conda Recipe · License: MIT · Downloads
Release: PyPI version · Conda Version · DOI
Compatibility: Python 3.8 | 3.9 | 3.10 · Conda Platforms
Build status: Build Status · Azure Pipelines · codecov
Code analysis: PEP8 · Imports: isort · Language grade: Python · Total alerts



💡 Introduction

During the exploration phase of a machine learning project, a data scientist tries to find the optimal pipeline for their specific use case. This usually involves applying standard data cleaning steps, creating or selecting useful features, trying out different models, etc. Testing multiple pipelines requires many lines of code, and writing it all in the same notebook often makes it long and cluttered. On the other hand, spreading the work over multiple notebooks makes it harder to compare the results and to keep an overview. On top of that, refactoring the code for every test can be quite time-consuming. How many times have you performed the same steps to pre-process a raw dataset? How many times have you copy-and-pasted code from an old repository to reuse it in a new use case?

ATOM is here to help solve these common issues. The package acts as a wrapper around the whole machine learning pipeline, helping the data scientist to rapidly find a good model for their problem. Avoid endless imports and documentation lookups. Avoid rewriting the same code over and over again. With just a few lines of code, it's possible to perform basic data cleaning steps, select relevant features and compare the performance of multiple models on a given dataset, providing quick insights into which pipeline performs best for the task at hand.

Example steps taken by ATOM's pipeline:

  1. Data Cleaning
    • Handle missing values
    • Encode categorical features
    • Detect and remove outliers
    • Balance the training set
  2. Feature engineering
    • Create new non-linear features
    • Select the most promising features
  3. Train and validate multiple models
    • Apply hyperparameter tuning
    • Fit the models on the training set
    • Evaluate the results on the test set
  4. Analyze the results
    • Get the scores on various metrics
    • Make plots to compare the model performances
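To give a feel for the boilerplate these steps normally involve, here is a minimal hand-written sketch of the data cleaning and encoding steps using plain pandas on a hypothetical toy dataset (the column names are made up; this is not ATOM's API):

```python
import pandas as pd

# Hypothetical raw data with missing values and a categorical column
df = pd.DataFrame({
    "Humidity": [71.0, None, 82.0, 65.0],
    "WindDir": ["N", "S", None, "N"],
    "RainTomorrow": [0, 1, 1, 0],
})

# Data cleaning: impute numeric columns with the median,
# categorical columns with the most frequent value
df["Humidity"] = df["Humidity"].fillna(df["Humidity"].median())
df["WindDir"] = df["WindDir"].fillna(df["WindDir"].mode()[0])

# Feature engineering: one-hot encode the categorical column
df = pd.get_dummies(df, columns=["WindDir"])
```

ATOM condenses steps like these (plus outlier handling, balancing, feature selection, model training and evaluation) into single method calls.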



(Diagram: overview of ATOM's pipeline)



🛠️ Installation

Install ATOM's newest release easily via pip:

$ pip install -U atom-ml

or via conda:

$ conda install -c conda-forge atom-ml



⚡ Usage

SageMaker Studio Lab · Binder

ATOM contains a variety of classes and functions to perform data cleaning, feature engineering, model training, plotting and much more. The easiest way to use everything ATOM has to offer is through one of the main classes: ATOMClassifier for classification tasks or ATOMRegressor for regression tasks.

Let's walk through an example. Click on the SageMaker Studio Lab badge at the top of this section to run the example yourself.

Make the necessary imports and load the data.

import pandas as pd
from atom import ATOMClassifier

# Load the Australian Weather dataset
X = pd.read_csv("https://raw.githubusercontent.com/tvdboom/ATOM/master/examples/datasets/weatherAUS.csv")
X.head()

Initialize the ATOMClassifier or ATOMRegressor class. These two classes are convenient wrappers for the whole machine learning pipeline. Contrary to sklearn's API, they are initialized with the data you want to manipulate.

atom = ATOMClassifier(X, y="RainTomorrow", test_size=0.3, verbose=2)

Data transformations are applied through atom's methods. For example, calling the impute method will initialize an Imputer instance, fit it on the training set and transform the whole dataset. The transformations are applied immediately after calling the method (no fit and transform commands necessary).

atom.impute(strat_num="median", strat_cat="most_frequent")  
atom.encode(strategy="LeaveOneOut", max_onehot=8)
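To clarify what the LeaveOneOut strategy does: each category value is replaced by the mean of the target over all *other* rows sharing that category. A rough pandas sketch of that idea on a made-up column (illustrative only, not ATOM's implementation):

```python
import pandas as pd

# Made-up categorical feature and binary target
df = pd.DataFrame({
    "city": ["a", "a", "a", "b", "b"],
    "rain": [1, 0, 1, 0, 1],
})

# Leave-one-out encoding: per-category target mean, excluding the row itself
grp = df.groupby("city")["rain"]
sums = grp.transform("sum")
counts = grp.transform("count")
df["city_loo"] = (sums - df["rain"]) / (counts - 1)
```

Excluding the row's own target value avoids leaking the label into the encoded feature, which is why leave-one-out is preferred over a plain target-mean encoding.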

Similarly, models are trained and evaluated using the run method. Here, we fit both a Random Forest and an AdaBoost model, and apply hyperparameter tuning.

atom.run(models=["RF", "AdaB"], metric="auc", n_trials=10)
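The "auc" metric refers to the area under the ROC curve, which equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal pure-Python sketch of that pairwise definition (for intuition only; ATOM computes it via its own scorers):

```python
def roc_auc(y_true, y_score):
    """Area under the ROC curve via the pairwise-ranking definition."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Count positive-negative pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 corresponds to random ranking, 1.0 to a perfect separation of the classes.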

Lastly, visualize the results using the integrated plots.

atom.plot_roc()
atom.rf.plot_confusion_matrix(normalize=True)



Documentation

Relevant links
About Learn more about the package.
🚀 Getting started New to ATOM? Here's how to get you started!
📢 Release history What are the new features of the latest release?
👨‍💻 User guide How to use ATOM and its features.
🎛️ API Reference The detailed reference for ATOM's API.
📋 Examples Example notebooks show you what can be done and how.
FAQ Get answers to frequently asked questions.
🔧 Contributing Do you want to contribute to the project? Read this before creating a PR.
🌳 Dependencies Which other packages does ATOM depend on?
📃 License Copyright and permissions under the MIT license.
