
pysparkling

A native Python implementation of Spark’s RDD interface. The primary objective is not to have RDDs that are resilient and distributed, but to remove the dependency on the JVM and Hadoop. The focus is on having a lightweight and fast implementation for small datasets. It is a drop-in replacement for PySpark’s SparkContext and RDD.

Use case: you have a pipeline that processes 100k input documents and converts them to normalized features, which are used to train a local scikit-learn classifier. That preprocessing is a good fit for a full Spark job. Now you want to use the trained classifier in an API endpoint, which needs the same preprocessing pipeline for a single document per call. This does not have to run in parallel, but it should initialize with little overhead and preferably without a dependency on the JVM. This is what pysparkling is for.
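
A minimal sketch of that pattern (normalize() and the sample documents are placeholders, not part of pysparkling):

from pysparkling import Context

def normalize(doc):
    # hypothetical preprocessing shared by training and serving
    return doc.lower().split()

documents = ['A first document.', 'A second document.']

# Batch: preprocess a corpus. The same logic could run on a full Spark cluster.
features = Context().parallelize(documents).map(normalize).collect()

# Serving: reuse the exact same function for a single document per API call,
# with no JVM and negligible startup overhead.
single_features = normalize('One incoming document.')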

Install

pip install pysparkling

Features

  • Supports multiple URI schemes: s3://, http:// and file://. Specify multiple files separated by commas. Resolves * and ? wildcards.

  • Handles .gz and .bz2 compressed files.

  • Parallelizes via multiprocessing.Pool, concurrent.futures.ThreadPoolExecutor or any other Pool-like object that has a map(func, iterable) method; see the sketch after this list.

  • The only dependencies are boto for AWS S3 and requests for HTTP.
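
Since any object with a map(func, iterable) method qualifies as a pool, a custom pool can be as small as this sketch (SerialPool is an illustration, not part of pysparkling; with an in-process pool no serializers are needed):

from pysparkling import Context

class SerialPool(object):
    # minimal Pool-like object: all pysparkling requires is map()
    def map(self, func, iterable):
        return [func(x) for x in iterable]

c = Context(SerialPool())
print(c.parallelize([1, 2, 3]).map(lambda x: x * x).collect())
# [1, 4, 9]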

The change log is in HISTORY.rst.

Examples

Word Count

from pysparkling import Context

counts = Context().textFile(
    'README.rst'
).map(
    # replace non-alphanumeric characters with spaces
    lambda line: ''.join(ch if ch.isalnum() else ' ' for ch in line)
).flatMap(
    # split() without arguments splits on whitespace and drops empty strings
    lambda line: line.split()
).map(
    lambda word: (word, 1)
).reduceByKey(
    lambda a, b: a + b
)
print(counts.collect())

which prints a long list of (word, count) pairs. This and more advanced examples are demonstrated in docs/demo.ipynb.

API

A typical pysparkling session starts either by parallelizing a list with Context.parallelize(my_list) or by reading data from a file with Context.textFile("path/to/textfile.txt"). Both methods return an RDD, which can then be processed with the methods below.
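
A minimal sketch of both entry points (the file path is a placeholder):

from pysparkling import Context

sc = Context()
rdd_from_list = sc.parallelize(['hello', 'world'])
rdd_from_file = sc.textFile('path/to/textfile.txt')
print(rdd_from_list.count())  # 2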

RDD

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.RDD

Context

A Context describes the setup. Instantiating a Context with the default arguments, Context(), is the most lightweight setup: all data stays in the local thread and is never serialized or deserialized.

If you want to process the data in parallel, you can use the multiprocessing module. Since the default pickle serializer cannot handle lambdas and nested functions, you can serialize all methods with cloudpickle instead. For example, a common instantiation with multiprocessing looks like this:

import multiprocessing
import pickle

import cloudpickle
from pysparkling import Context

c = Context(
    multiprocessing.Pool(4),
    serializer=cloudpickle.dumps,
    deserializer=pickle.loads,
)

This assumes that your data is serializable with plain pickle, which is generally faster than cloudpickle. You can also specify a custom serializer and deserializer for the data.
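
A sketch that also overrides the data serializer, assuming Context accepts data_serializer and data_deserializer keyword arguments (verify the exact names against the API doc below):

import multiprocessing
import pickle

import cloudpickle
from pysparkling import Context

c = Context(
    multiprocessing.Pool(4),
    serializer=cloudpickle.dumps,
    deserializer=pickle.loads,
    # assumed keyword names for the data path; see the API doc below
    data_serializer=pickle.dumps,
    data_deserializer=pickle.loads,
)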

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.Context

fileio

The functionality provided by this module is used in Context.textFile() for reading and in RDD.saveAsTextFile() for writing. You can also use it directly: File(filename).dump(some_data) writes a file, File(filename).load() reads it back, and File.exists(path) checks whether a file exists. All methods transparently handle http://, s3:// and file:// locations and the compression and decompression of .gz and .bz2 files.
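
A short sketch of direct file access, assuming dump() accepts a file-like object and load() returns one (check the fileio API doc below for the exact signatures):

import io

from pysparkling.fileio import File

path = '/tmp/demo.txt.gz'  # placeholder path; the .gz suffix triggers gzip compression

File(path).dump(io.BytesIO(b'hello world\n'))
if File.exists(path):
    print(File(path).load().read())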

For S3 access, set the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID and use paths of the form s3://bucket_name/filename.txt.

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.fileio.File


