Spark acceleration for Scikit-Learn cross validation techniques

This project is a major rewrite of the spark-sklearn project, which no longer appears to be under active development. It focuses specifically on accelerating Scikit-Learn's cross validation functionality using PySpark.

Improvements over spark-sklearn

The functionality is based on the sklearn.model_selection module rather than the deprecated and soon-to-be-removed sklearn.grid_search. The newer module contains several nicer features, and scikit-spark maintains full compatibility with it.

Installation

The package can be installed through pip:

pip install scikit-spark

So far it has only been tested with Spark 2.2.0 and later, but it may work with older versions.

Usage

The functionality here is meant to resemble using Scikit-Learn as closely as possible. By default (with spark=True), the SparkSession is obtained internally by calling SparkSession.builder.getOrCreate(), so instantiating and calling the classes works the same way as in Scikit-Learn (ideally you will already have created a SparkSession).

This example is adapted from the Scikit-Learn documentation. It instantiates a local SparkSession and uses it to distribute the cross validation folds and parameter iterations. To actually benefit from this package, it should be run distributed across several machines with Spark, since running it locally is slower than Scikit-Learn's own parallelisation.
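To make the "folds and iterations" parallelism concrete: each unit of work Spark distributes is roughly one fit, i.e. one parameter combination evaluated on one cross validation fold. A minimal standard-library sketch of how a parameter grid and a fold count expand into that task list (illustrative only, not skspark internals):

```python
from itertools import product

# The same grid used in the example below
parameters = {'kernel': ('linear', 'rbf'), 'C': [0.01, 0.1, 1, 10, 100]}
n_folds = 3

# Cartesian product of the parameter values -> one dict per candidate setting
keys = sorted(parameters)
candidates = [dict(zip(keys, values))
              for values in product(*(parameters[k] for k in keys))]

# Each candidate is fitted once per fold, so the work list is candidates x folds
tasks = [(params, fold) for params in candidates for fold in range(n_folds)]

print(len(candidates))  # 2 kernels * 5 C values = 10 candidate settings
print(len(tasks))       # 10 candidates * 3 folds = 30 fits to distribute
```

With a grid this small, Spark's scheduling overhead dominates locally; the task list only pays off when the fits are expensive and spread over a cluster.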

from sklearn import svm, datasets
from pyspark.sql import SparkSession

iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[0.01, 0.1, 1, 10, 100]}
svc = svm.SVC()

spark = SparkSession.builder\
    .master("local[*]")\
    .appName("skspark-grid-search-doctests")\
    .getOrCreate()

# How to run grid search
from skspark.model_selection import GridSearchCV

gs = GridSearchCV(svc, parameters)
gs.fit(iris.data, iris.target)

# How to run random search
from skspark.model_selection import RandomizedSearchCV

rs = RandomizedSearchCV(svc, parameters)
rs.fit(iris.data, iris.target)
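Unlike the grid search, the randomized search draws a fixed number of candidate settings from the parameter distributions instead of enumerating the full grid. A rough standard-library illustration of that sampling (illustrative only, not skspark internals):

```python
import random

# Same parameter space as above; for a randomized search these act as
# distributions to sample from rather than a grid to enumerate
param_distributions = {'kernel': ('linear', 'rbf'), 'C': [0.01, 0.1, 1, 10, 100]}
n_iter = 4  # mirrors the n_iter argument of RandomizedSearchCV

rng = random.Random(0)  # seeded for reproducibility
sampled = [
    {name: rng.choice(values) for name, values in param_distributions.items()}
    for _ in range(n_iter)
]

for params in sampled:
    print(params)  # each dict is one candidate to cross-validate
```

Each sampled candidate then expands into one fit per fold, exactly as in the grid search, so the same Spark distribution applies.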

Current and upcoming functionality

  • Current
    • model_selection.RandomizedSearchCV
    • model_selection.GridSearchCV
  • Upcoming
    • model_selection.cross_val_predict
    • model_selection.cross_val_score

The docstrings are modified versions of the Scikit-Learn ones and are still being converted to refer specifically to this project.

Performance optimisations

Reducing RAM usage

Coming soon

Download files

Download the file for your platform.

Source Distribution

scikit-spark-0.1.0.tar.gz (17.6 kB)

Uploaded Source

Built Distributions

scikit_spark-0.1.0-py3-none-any.whl (19.8 kB)

Uploaded Python 3

scikit_spark-0.1.0-py2-none-any.whl (30.0 kB)

Uploaded Python 2

File details

Details for the file scikit-spark-0.1.0.tar.gz.

File metadata

  • Download URL: scikit-spark-0.1.0.tar.gz
  • Upload date:
  • Size: 17.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.18.4 setuptools/39.2.0 requests-toolbelt/0.8.0 tqdm/4.23.4 CPython/2.7.14

File hashes

Hashes for scikit-spark-0.1.0.tar.gz
  • SHA256: b304088b1ec700149328d287332c81dfeadf4d22359e3cb7ea1a7a0ec394a8f3
  • MD5: 8373913b2c1228076042adaa0db7c234
  • BLAKE2b-256: e487c24c2828945f60e201d0a50c0633655885a558aa87b062452bd19a2dd7c5

See more details on using hashes here.

File details

Details for the file scikit_spark-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: scikit_spark-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 19.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.18.4 setuptools/39.2.0 requests-toolbelt/0.8.0 tqdm/4.23.4 CPython/2.7.14

File hashes

Hashes for scikit_spark-0.1.0-py3-none-any.whl
  • SHA256: 004ef746f966cae95a70a522f8c3c653bebd58b9485150349244713e413ca284
  • MD5: ff419d3fec1204ae2695a367be66f2eb
  • BLAKE2b-256: ddd72ef215c4c7d4edc567a49d5d1d66637e84a7944520fcd1eb783a01bbceac

File details

Details for the file scikit_spark-0.1.0-py2-none-any.whl.

File metadata

  • Download URL: scikit_spark-0.1.0-py2-none-any.whl
  • Upload date:
  • Size: 30.0 kB
  • Tags: Python 2
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.11.0 pkginfo/1.4.2 requests/2.18.4 setuptools/39.2.0 requests-toolbelt/0.8.0 tqdm/4.23.4 CPython/2.7.14

File hashes

Hashes for scikit_spark-0.1.0-py2-none-any.whl
  • SHA256: c502f8cbf39111ca7cba0806c060fe5e2577086c97630f00cf2d110969fd401f
  • MD5: 769032dbb6ef134346a3a611ee30267b
  • BLAKE2b-256: 977e483ede11897615e82d7798a6525ddb401d3187a86ad3b28af76f867d3ac8
