
Python native implementation of the Spark RDD interface.

Project description

https://raw.githubusercontent.com/svenkreiss/pysparkling/master/logo/logo-w100.png

pysparkling

A native Python implementation of Spark’s RDD interface. The primary objective is not to have RDDs that are resilient and distributed, but to remove the dependency on the JVM and Hadoop. The focus is on having a lightweight and fast implementation for small datasets. It is a drop-in replacement for PySpark’s SparkContext and RDD.

Use case: you have a pipeline that processes 100k input documents and converts them to normalized features. These features are used to train a local scikit-learn classifier. The preprocessing is perfect for a full Spark task. Now, you want to use this trained classifier in an API endpoint. You need the same preprocessing pipeline for a single document per API call. This does not have to be done in parallel, but there should be only a small overhead in initialization and preferably no dependency on the JVM. This is what pysparkling is for.
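
The pattern, sketched minimally (normalize_features and the documents are hypothetical stand-ins):

from pysparkling import Context

def normalize_features(doc):
    # hypothetical preprocessing shared by both code paths
    return doc.lower().split()

# offline: run the same pipeline over the full corpus
corpus = ['First document.', 'Second document.']  # stand-in for 100k docs
features = Context().parallelize(corpus).map(normalize_features).collect()

# online: reuse the identical function for one document per API call
single_features = normalize_features('A new document at the endpoint.')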

Join the chat at https://gitter.im/svenkreiss/pysparkling

Install

pip install pysparkling

Features

  • Supports multiple URI schemes: s3://, http:// and file://. Multiple files can be given as a comma-separated list, and * and ? wildcards are resolved (see the example after this list).

  • Handles .gz and .bz2 compressed files.

  • Parallelization via multiprocessing.Pool, concurrent.futures.ThreadPoolExecutor or any other Pool-like object that has a map(func, iterable) method.

  • The only dependencies are boto (for AWS S3 access) and requests (for HTTP).
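
Several of these features can be combined in one call, as sketched below with hypothetical bucket and file names:

from pysparkling import Context

# comma-separated locations; wildcard resolution and .gz
# decompression are handled transparently
rdd = Context().textFile('s3://my-bucket/logs-*.txt.gz,file:///tmp/extra.txt')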

The change log is in HISTORY.rst.

Examples

Word Count

from pysparkling import Context

counts = Context().textFile(
    'README.rst'
).map(
    # replace non-alphanumeric characters with spaces
    lambda line: ''.join(ch if ch.isalnum() else ' ' for ch in line)
).flatMap(
    # split() without arguments drops the empty strings that
    # split(' ') would produce between consecutive spaces
    lambda line: line.split()
).map(
    lambda word: (word, 1)
).reduceByKey(
    lambda a, b: a + b
)
print(counts.collect())

This prints a long list of pairs of words and their counts. This and more advanced examples are demonstrated in docs/demo.ipynb.

API

A usual pysparkling session starts either by parallelizing a list with Context.parallelize(my_list) or by reading data from a file with Context.textFile("path/to/textfile.txt"). Both methods return an RDD, which can then be processed with the methods below.
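
Both entry points side by side (the file path is the placeholder from above):

from pysparkling import Context

rdd_from_list = Context().parallelize([1, 2, 3])
rdd_from_file = Context().textFile('path/to/textfile.txt')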

RDD

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.RDD
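
Since pysparkling is a drop-in replacement, the RDD methods mirror PySpark's. A few common transformations and actions with a local Context:

from pysparkling import Context

rdd = Context().parallelize([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
evens = rdd.filter(lambda x: x % 2 == 0)     # transformation: keep even numbers
print(evens.count())                          # action: 5
print(evens.map(lambda x: x * x).collect())   # action: [0, 4, 16, 36, 64]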

Context

A Context describes the setup. Instantiating a Context with the default arguments, Context(), is the most lightweight setup: all data stays in the local thread and is never serialized or deserialized.

If you want to process the data in parallel, you can use the multiprocessing module. Because the default pickle serializer cannot handle lambdas and locally defined functions, you can serialize all methods with cloudpickle instead. For example, a common instantiation with multiprocessing looks like this:

import multiprocessing
import pickle
import cloudpickle
from pysparkling import Context

c = Context(
    multiprocessing.Pool(4),
    serializer=cloudpickle.dumps,
    deserializer=pickle.loads,
)

This setup serializes functions with cloudpickle but assumes that your data is serializable with plain pickle, which is generally faster. You can also specify a custom serializer and deserializer for the data.
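
With a thread pool instead, no serializers should be needed at all, since threads share memory. A minimal sketch, passing the pool the same way as above:

import concurrent.futures

from pysparkling import Context

c = Context(concurrent.futures.ThreadPoolExecutor(4))
print(c.parallelize([1, 2, 3]).map(lambda x: x * 2).collect())  # [2, 4, 6]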

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.Context

fileio

The functionality provided by this module is used in Context.textFile() for reading and in RDD.saveAsTextFile() for writing. You can also use this submodule directly: File(filename).dump(some_data) writes a file, File(filename).load() reads one, and File.exists(path) checks for its existence. All methods transparently handle http://, s3:// and file:// locations and compression/decompression of .gz and .bz2 files.

Use the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID for authentication, and use file paths of the form s3://bucket_name/filename.txt.
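
A short sketch mirroring the calls described above; the paths are hypothetical:

from pysparkling.fileio import File

File('/tmp/out.txt.gz').dump('hello world\n')   # written gzip-compressed
data = File('/tmp/out.txt.gz').load()           # read back, decompressed
print(File.exists('s3://my-bucket/input.txt'))  # existence check without reading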

API doc: http://pysparkling.trivial.io/v0.2/api.html#pysparkling.fileio.File

Download files

Download the file for your platform.

Source Distribution

pysparkling-0.2.29.tar.gz (20.3 kB)

Uploaded: Source

Built Distribution


pysparkling-0.2.29-py2.py3-none-any.whl (27.0 kB)

Uploaded: Python 2, Python 3

File details

Details for the file pysparkling-0.2.29.tar.gz.

File metadata

  • Download URL: pysparkling-0.2.29.tar.gz
  • Upload date:
  • Size: 20.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for pysparkling-0.2.29.tar.gz:

  • SHA256: d504a5320646f52da2855965122dd350f780dc12e44aae258b199ec19149c31d
  • MD5: 91909ad6e6ad4fac84512aaeaf886777
  • BLAKE2b-256: aa8474465e9dcab0021017e258d5d64e9e33fda7711717162b23cf01c90913cb


File details

Details for the file pysparkling-0.2.29-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for pysparkling-0.2.29-py2.py3-none-any.whl:

  • SHA256: 4c463a99ab0616f0c4d224cd0fb5a2d466f9ed2bf77d3c71958c7c7089c2060b
  • MD5: 97383de76a2432b66bcf87a76a00738e
  • BLAKE2b-256: 827dd1d2fc65e016b2b91b213df313f39d95969f5675b7d735687e17e0493fa3

