
auto\_ml
========

Automated machine learning for production and analytics

|Build Status| |Documentation Status| |PyPI version| |Coverage Status|
|license|

Installation
------------

- ``pip install auto_ml``

Getting started
---------------

.. code:: python

    from auto_ml import Predictor
    from auto_ml.utils import get_boston_dataset

    df_train, df_test = get_boston_dataset()

    column_descriptions = {
        'MEDV': 'output'
        , 'CHAS': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)

    ml_predictor.train(df_train)

    ml_predictor.score(df_test, df_test.MEDV)

Show off some more features!
----------------------------

auto\_ml is designed for production. Here's an example that includes
serializing and loading the trained model, then getting predictions on
single dictionaries, roughly the process you'd likely follow to deploy
the trained model.

.. code:: python

    from auto_ml import Predictor
    from auto_ml.utils import get_boston_dataset
    from auto_ml.utils_models import load_ml_model

    # Load data
    df_train, df_test = get_boston_dataset()

    # Tell auto_ml which column is 'output'
    # Also note columns that aren't purely numerical
    # Examples include ['nlp', 'date', 'categorical', 'ignore']
    column_descriptions = {
        'MEDV': 'output'
        , 'CHAS': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)

    ml_predictor.train(df_train)

    # Score the model on test data
    test_score = ml_predictor.score(df_test, df_test.MEDV)

    # auto_ml is specifically tuned for running in production
    # It can get predictions on an individual row (passed in as a dictionary)
    # A single prediction like this takes ~1 millisecond
    # Here we will demonstrate saving the trained model, and loading it again
    file_name = ml_predictor.save()

    trained_model = load_ml_model(file_name)

    # .predict and .predict_proba take in either:
    #   A pandas DataFrame
    #   A list of dictionaries
    #   A single dictionary (optimized for speed in production environments)
    predictions = trained_model.predict(df_test)
    print(predictions)
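The single-dictionary path mentioned above can be sketched like this (a hedged continuation of the previous example: ``trained_model`` and ``df_test`` come from the code above, and ``MEDV`` is the Boston housing label column):

.. code:: python

    # Build one row as a plain dictionary, dropping the label column
    row = df_test.iloc[0].to_dict()
    del row['MEDV']

    # Passing a single dict is the fast (~1 ms) path for live production traffic
    print(trained_model.predict(row))
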

XGBoost, Deep Learning with TensorFlow & Keras, and LightGBM
------------------------------------------------------------

auto\_ml has all three of these awesome libraries integrated! Generally,
just pass one of them in for ``model_names``:
``ml_predictor.train(data, model_names=['DeepLearningClassifier'])``

Available options are:

- ``DeepLearningClassifier`` and ``DeepLearningRegressor``
- ``XGBClassifier`` and ``XGBRegressor``
- ``LGBMClassifier`` and ``LGBMRegressor``

All of these projects are ready for production. These projects all have
prediction time in the 1 millisecond range for a single prediction, and
are able to be serialized to disk and loaded into a new environment
after training.

Depending on your machine, they can occasionally be difficult to
install, so they are not included in auto\_ml's default installation.
You are responsible for installing them yourself. auto\_ml will run fine
without them installed (we check what's installed before choosing which
algorithm to use). If you want to try the easy install, just
``pip install -r advanced_requirements.txt``, which will install
TensorFlow, Keras, and XGBoost. LightGBM is not currently available as
a pip install.
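As a sketch (assuming XGBoost, and TensorFlow with Keras, are installed in your environment), trying a couple of these model families on the same data might look like:

.. code:: python

    from auto_ml import Predictor
    from auto_ml.utils import get_boston_dataset

    df_train, df_test = get_boston_dataset()

    column_descriptions = {
        'MEDV': 'output'
        , 'CHAS': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)

    # Pass several model names and auto_ml will train and compare them
    ml_predictor.train(df_train, model_names=['XGBRegressor', 'DeepLearningRegressor'])

    ml_predictor.score(df_test, df_test.MEDV)
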

Classification
--------------

Binary and multiclass classification are both supported. Note that, for
now, labels must be integers (0 and 1 for binary classification).
auto\_ml will automatically detect whether it is a binary or multiclass
classification problem; you just have to pass in
``ml_predictor = Predictor(type_of_estimator='classifier', column_descriptions=column_descriptions)``
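A minimal sketch, using a hypothetical toy dataset (any DataFrame with an integer label column will do; with real data you'd have many more rows):

.. code:: python

    import pandas as pd
    from auto_ml import Predictor

    # Hypothetical data: note the integer 0/1 labels in 'churned'
    df_train = pd.DataFrame({
        'age': [22, 35, 58, 44, 19, 60]
        , 'plan': ['basic', 'pro', 'pro', 'basic', 'basic', 'pro']
        , 'churned': [1, 0, 0, 1, 1, 0]
    })

    column_descriptions = {
        'churned': 'output'
        , 'plan': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='classifier', column_descriptions=column_descriptions)
    ml_predictor.train(df_train)
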

Feature Learning
----------------

Also known as "finally found a way to make this deep learning stuff
useful for my business". Deep Learning is great at learning important
features from your data. But the way it turns these learned features
into a final prediction is relatively basic. Gradient boosting is great
at turning features into accurate predictions, but it doesn't do any
feature learning.

In auto\_ml, you can now automatically use both types of models for what
they're great at. If you pass
``feature_learning=True, fl_data=some_dataframe`` to ``.train()``, we
will do exactly that: train a deep learning model on your ``fl_data``.
We won't ask it for predictions (the standard stacking approach);
instead, we'll use its penultimate layer to get its 10 most useful features.
Then we'll train a gradient boosted model (or any other model of your
choice) on those features plus all the original features.

Across some problems, we've witnessed this lead to a 5% gain in
accuracy, while still making predictions in 1-4 milliseconds, depending
on model complexity.

``ml_predictor.train(df_train, feature_learning=True, fl_data=df_fl_data)``

This feature only supports regression and binary classification
currently. The rest of auto\_ml supports multiclass classification.
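Putting it together, a sketch of the full call. Here ``df_fl_data`` is a hypothetical held-out slice of the training data, on the assumption that the feature-learning model should not see the same rows as the final model:

.. code:: python

    from auto_ml import Predictor
    from auto_ml.utils import get_boston_dataset

    df_train, df_test = get_boston_dataset()

    # Hold out a slice of the training data for the feature-learning model
    df_fl_data = df_train.sample(frac=0.3, random_state=42)
    df_train_main = df_train.drop(df_fl_data.index)

    column_descriptions = {
        'MEDV': 'output'
        , 'CHAS': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
    ml_predictor.train(df_train_main, feature_learning=True, fl_data=df_fl_data)
    ml_predictor.score(df_test, df_test.MEDV)
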

Categorical Ensembling
----------------------

Ever wanted to train one model for every store/customer, but didn't
want to maintain hundreds of thousands of independent models? With
``ml_predictor.train_categorical_ensemble()``, we will handle that for
you. You'll still have just one consistent API,
``ml_predictor.predict(data)``, but behind this single API will be one
model for each category you included in your training data.

Just tell us which column holds the category you want to split on, and
we'll handle the rest. As always, saving the model, loading it in a
different environment, and getting speedy predictions live in production
is baked right in.

``ml_predictor.train_categorical_ensemble(df_train, categorical_column='store_name')``
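A fuller sketch, where ``df_train``, ``df_test``, and the ``sales`` and ``store_name`` columns are all hypothetical; saving and loading work the same as for a single model:

.. code:: python

    from auto_ml import Predictor
    from auto_ml.utils_models import load_ml_model

    # df_train is assumed to have 'sales' and 'store_name' columns
    column_descriptions = {
        'sales': 'output'
        , 'store_name': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)

    # One model per store, all behind the single .predict() API
    ml_predictor.train_categorical_ensemble(df_train, categorical_column='store_name')

    file_name = ml_predictor.save()
    trained_ensemble = load_ml_model(file_name)
    predictions = trained_ensemble.predict(df_test)
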

More details available in the docs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

http://auto-ml.readthedocs.io/en/latest/

Advice
~~~~~~

Before you go any further, try running the code. Load up some data
(either a DataFrame, or a list of dictionaries, where each dictionary is
a row of data). Make a ``column_descriptions`` dictionary that tells us
which attribute name in each row represents the value we're trying to
predict. Pass all that into ``auto_ml``, and see what happens!
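For example, with a list of dictionaries (all column names here are hypothetical, and real data would have many more rows), the setup is just:

.. code:: python

    from auto_ml import Predictor

    # Each dictionary is one row of data
    data = [
        {'price': 250000, 'sqft': 1400, 'neighborhood': 'east'}
        , {'price': 310000, 'sqft': 1800, 'neighborhood': 'west'}
        , {'price': 420000, 'sqft': 2400, 'neighborhood': 'west'}
    ]

    # Tell auto_ml which attribute is the value we're trying to predict
    column_descriptions = {
        'price': 'output'
        , 'neighborhood': 'categorical'
    }

    ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
    ml_predictor.train(data)
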

Everything else in these docs assumes you have done at least the above.
Start there and everything else will build on top. But this part gets
you the output you're probably interested in, without unnecessary
complexity.

Docs
----

The full docs are available at http://auto-ml.readthedocs.io/en/latest/.
Again, though, I'd strongly recommend running this on an actual dataset
before referencing the docs any further.

What this project does
----------------------

Automates the whole machine learning process, making it super easy to
use for both analytics, and getting real-time predictions in production.

A quick overview of buzzwords, this project automates:

- Analytics (pass in data, and auto\_ml will tell you the relationship
  of each variable to what it is you're trying to predict).
- Feature Engineering (particularly around dates, and NLP).
- Robust Scaling (turning all values into their scaled versions between
  the range of 0 and 1, in a way that is robust to outliers, and works
  with sparse data).
- Feature Selection (picking only the features that actually prove
  useful).
- Data formatting (turning a DataFrame or a list of dictionaries into a
  sparse matrix, one-hot encoding categorical variables, taking the
  natural log of y for regression problems, etc.).
- Model Selection (which model works best for your problem; we try
  roughly a dozen apiece for classification and regression problems,
  including favorites like XGBoost if it's installed on your machine).
- Hyperparameter Optimization (what hyperparameters work best for that
  model).
- Big Data (feed it lots of data; it's fairly efficient with
  resources).
- Unicorns (you could conceivably train it to predict what is a unicorn
  and what is not).
- Ice Cream (mmm, tasty...).
- Hugs (this makes it much easier to do your job, hopefully leaving you
  more time to hug those you care about).

Running the tests
~~~~~~~~~~~~~~~~~

If you've cloned the source code and are making any changes (highly
encouraged!), or just want to make sure everything works in your
environment, run ``nosetests -v tests``.

CI is also set up, so if you're developing on this, you can just open a
PR, and the tests will run automatically on Travis-CI.

The tests are relatively comprehensive, though as with everything with
auto\_ml, I happily welcome your contributions here!

.. |Build Status| image:: https://travis-ci.org/ClimbsRocks/auto_ml.svg?branch=master
   :target: https://travis-ci.org/ClimbsRocks/auto_ml
.. |Documentation Status| image:: http://readthedocs.org/projects/auto-ml/badge/?version=latest
   :target: http://auto-ml.readthedocs.io/en/latest/?badge=latest
.. |PyPI version| image:: https://badge.fury.io/py/auto_ml.svg
   :target: https://badge.fury.io/py/auto_ml
.. |Coverage Status| image:: https://coveralls.io/repos/github/ClimbsRocks/auto_ml/badge.svg?branch=master&cacheBuster=1
   :target: https://coveralls.io/github/ClimbsRocks/auto_ml?branch=master
.. |license| image:: https://img.shields.io/github/license/mashape/apistatus.svg
   :target:
