Distributed scikit-learn meta-estimators with PySpark
Project description
What is it?
sk-dist is a Python package for machine learning built on top of scikit-learn and is distributed under the Apache 2.0 software license. The sk-dist module can be thought of as “distributed scikit-learn” as its core functionality is to extend the scikit-learn built-in joblib parallelization of meta-estimator training to spark. A popular use case is the parallelization of grid search as shown here:
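A minimal sketch of distributed grid search, assuming the DistGridSearchCV class in skdist.distribute.search, which mirrors the scikit-learn GridSearchCV interface and additionally accepts a SparkContext:

```python
from sklearn import datasets, svm
from pyspark.sql import SparkSession

# sk-dist's distributed counterpart to GridSearchCV
from skdist.distribute.search import DistGridSearchCV

# spark session and context; the SparkContext is what sk-dist uses to distribute work
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# small example dataset
digits = datasets.load_digits()
X, y = digits["data"], digits["target"]

# candidate hyperparameters; each parameter set candidate is fit as its own spark task
param_grid = {
    "C": [0.01, 0.1, 1.0, 10.0],
    "gamma": ["scale", "auto", 0.001, 0.01],
    "kernel": ["rbf", "poly"],
}

model = DistGridSearchCV(
    svm.SVC(), param_grid,
    sc=sc, cv=5, scoring="f1_weighted",
)
model.fit(X, y)  # spark artifacts are stripped from the fitted estimator
print(model.best_score_)
```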
Check out the blog post for more information on the motivation and use cases of sk-dist.
Main Features
Distributed Training - sk-dist parallelizes the training of scikit-learn meta-estimators with PySpark. This allows distributed training of these estimators without any constraint on the physical resources of any one machine. In all cases, spark artifacts are automatically stripped from the fitted estimator. These estimators can then be pickled and un-pickled for prediction tasks, operating identically at predict time to their scikit-learn counterparts. Supported tasks are:
Grid Search: Hyperparameter optimization techniques, particularly GridSearchCV and RandomizedSearchCV, are distributed such that each parameter set candidate is trained in parallel.
Multiclass Strategies: Multiclass classification strategies, particularly OneVsRestClassifier and OneVsOneClassifier, are distributed such that each binary problem is trained in parallel.
Tree Ensembles: Decision tree ensembles for classification and regression, particularly RandomForest and ExtraTrees, are distributed such that each tree is trained in parallel.
Distributed Prediction - sk-dist provides a prediction module which builds vectorized UDFs for PySpark DataFrames using fitted scikit-learn estimators. This distributes the predict and predict_proba methods of scikit-learn estimators, enabling large scale prediction with scikit-learn.
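A rough sketch of the prediction module, assuming a helper along the lines of get_prediction_udf in skdist.predict (the exact helper name and signature should be checked against the installed version):

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# assumed helper; check skdist.predict in the installed version for the exact name
from skdist.predict import get_prediction_udf

spark = SparkSession.builder.getOrCreate()

# fit a plain scikit-learn model locally
digits = load_digits()
model = LogisticRegression(max_iter=1000).fit(digits["data"], digits["target"])

# a spark DataFrame with one column per feature, named "0".."63"
pdf = pd.DataFrame(digits["data"], columns=[str(i) for i in range(64)])
df = spark.createDataFrame(pdf)

# wrap predict/predict_proba as vectorized pandas UDFs
predict = get_prediction_udf(model, method="predict")
predict_proba = get_prediction_udf(model, method="predict_proba")

cols = [F.col(str(c)) for c in range(64)]
predictions = (
    df
    .withColumn("preds", predict(*cols))
    .withColumn("scores", predict_proba(*cols))
    .select("preds", "scores")
)
predictions.show(5)
```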
Feature Encoding - sk-dist provides a flexible feature encoding utility called Encoderizer which encodes mix-typed feature spaces using either default behavior or user defined customizable settings. It is particularly aimed at text features, but it additionally handles numeric and dictionary type feature spaces.
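A brief sketch of the encoder, assuming Encoderizer lives in skdist.distribute.encoder, follows the scikit-learn transformer interface, and accepts a pandas DataFrame; the "small" size preset is an assumption:

```python
import pandas as pd

# assumed import path; Encoderizer follows the scikit-learn transformer API
from skdist.distribute.encoder import Encoderizer

# mixed-type feature space: a text column and a numeric column
df = pd.DataFrame({
    "description": ["great product", "did not work", "okay value"],
    "price": [19.99, 5.49, 12.00],
})

# "small" selects a default encoding configuration (preset name assumed)
encoder = Encoderizer(size="small")
X_t = encoder.fit_transform(df)  # encoded feature matrix ready for an estimator
```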
Installation
Dependencies
sk-dist requires:
Dependency Notes
versions of numpy, scipy and joblib that are compatible with any supported version of scikit-learn should be sufficient for sk-dist
sk-dist is not supported with Python 2
Spark Dependencies
Most sk-dist functionality requires a spark installation as well as PySpark. Some functionality can run without spark, so spark related dependencies are not strictly required. The connection between sk-dist and spark relies solely on a SparkContext passed as an argument to various sk-dist classes upon instantiation.
A variety of spark configurations and setups will work. It is left up to the user to configure their own spark setup. The testing suite runs spark 2.3 and spark 2.4, though any spark 2.0+ versions are expected to work.
An additional spark related dependency is pyarrow, which is used only for skdist.predict functions. These functions use vectorized pandas UDFs, which require pyarrow>=0.8.0 (tested with pyarrow==0.15.0). Depending on the spark version, it may be necessary to set spark.conf.set("spark.sql.execution.arrow.enabled", "true") in the spark configuration.
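For example, the Arrow flag can be enabled when building the spark session (the app name below is arbitrary):

```python
from pyspark.sql import SparkSession

# enable Arrow so vectorized pandas UDFs can be used by skdist.predict
spark = (
    SparkSession.builder
    .appName("skdist-predict")
    .config("spark.sql.execution.arrow.enabled", "true")
    .getOrCreate()
)
```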
User Installation
The easiest way to install sk-dist is with pip:
pip install --upgrade sk-dist
You can also download the source code:
git clone https://github.com/Ibotta/sk-dist.git
Testing
With pytest installed, you can run tests locally:
pytest sk-dist
Examples
The package contains numerous examples of how to use sk-dist in practice.
Gradient Boosting
sk-dist has been tested with a number of popular gradient boosting packages that conform to the scikit-learn API. This includes xgboost and catboost. These will need to be installed in addition to sk-dist on all nodes of the spark cluster via a node bootstrap script. Version compatibility is left up to the user.
Support for lightgbm is not guaranteed, as it requires additional installations on all nodes of the spark cluster. This may work given proper installation but has not been tested with sk-dist.
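As an illustrative sketch only (assuming xgboost and sk-dist are installed on the driver and on every worker node), a scikit-learn-compatible booster can be dropped straight into the distributed grid search shown earlier:

```python
from xgboost import XGBClassifier  # must be installed on all spark nodes
from sklearn.datasets import load_breast_cancer
from pyspark.sql import SparkSession

from skdist.distribute.search import DistGridSearchCV

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

X, y = load_breast_cancer(return_X_y=True)

# each parameter combination fits on its own spark task
param_grid = {
    "max_depth": [3, 5, 7],
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
}

model = DistGridSearchCV(
    XGBClassifier(), param_grid,
    sc=sc, cv=5, scoring="roc_auc",
)
model.fit(X, y)
print(model.best_params_)
```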
Background
The project was started at Ibotta Inc. on the machine learning team and open sourced in 2019.
It is currently maintained by the machine learning team at Ibotta. Special thanks to those who contributed to sk-dist while it was initially in development at Ibotta.
Thanks to James Foley for logo artwork.
File details
Details for the file sk-dist-0.1.9.tar.gz.
File metadata
- Download URL: sk-dist-0.1.9.tar.gz
- Upload date:
- Size: 41.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.40.2 CPython/3.7.6
File hashes

Algorithm | Hash digest
---|---
SHA256 | fd956610ef88343686c6dcc1e01df617829cdf43444a4d3a41437b214df8a89d
MD5 | 614ca9ddf622dfc6214b1d2d571afbe2
BLAKE2b-256 | 2ba269d38208de981c980eb8513348a82076b26ee52f2066b898d73252b74b95