Pure Python implementation of the Spark RDD interface.

pysparkling

A native Python implementation of Spark’s RDD interface. The primary objective is to remove the dependency on the JVM and Hadoop: pysparkling is a lightweight, fast implementation for small datasets, at the expense of some data resilience and parallel processing features. It is a drop-in replacement for PySpark’s SparkContext and RDD.

Use case: you have a pipeline that processes 100k input documents and converts them to normalized features, which are used to train a local scikit-learn classifier. The preprocessing is a perfect fit for a full Spark task. Now you want to use this trained classifier in an API endpoint and need the same preprocessing pipeline for a single document per API call. This does not have to run in parallel, but there should be only a small initialization overhead and preferably no dependency on the JVM. This is what pysparkling is for.

Join the chat at https://gitter.im/svenkreiss/pysparkling

Install

pip install pysparkling[s3,hdfs,http]

Features

  • Supports multiple URI schemes: s3://, hdfs://, http:// and file://. Specify multiple files separated by commas. Resolves * and ? wildcards (see the example after this list).

  • Handles .gz and .bz2 compressed files.

  • Parallelization via multiprocessing.Pool, concurrent.futures.ThreadPoolExecutor or any other Pool-like objects that have a map(func, iterable) method.

  • Plain pysparkling has no dependencies (use pip install pysparkling). Some file access methods have optional dependencies: boto for AWS S3, requests for http:// and hdfs for HDFS.
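
For example, several inputs can be read in one call, mixing schemes, wildcards and compressed files. A short sketch with hypothetical paths:

from pysparkling import Context

# one comma-separated string of paths; the *.gz files are
# decompressed transparently
lines = Context().textFile(
    'logs/2015-*.txt.gz,s3://my-bucket/data/part-?.txt'
)
print(lines.count())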

The change log is in HISTORY.rst.

Examples

Word Count

from pysparkling import Context

counts = Context().textFile(
    'README.rst'
).map(
    # replace every non-alphanumeric character with a space
    lambda line: ''.join(ch if ch.isalnum() else ' ' for ch in line)
).flatMap(
    # split each line into words
    lambda line: line.split(' ')
).map(
    # emit a (word, 1) pair for every word
    lambda word: (word, 1)
).reduceByKey(
    # sum the counts per word
    lambda a, b: a + b
)
print(counts.collect())

which prints a long list of (word, count) pairs. This and more advanced examples are demonstrated in docs/demo.ipynb.

API

A usual pysparkling session starts either with parallelizing a list via Context.parallelize(my_list) or with reading data from a file via Context.textFile("path/to/textfile.txt"). Both methods return an RDD, which can then be processed with the methods below.
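
A minimal example of the parallelize() entry point:

from pysparkling import Context

rdd = Context().parallelize([1, 2, 3, 4])
print(rdd.map(lambda x: x * 2).collect())  # [2, 4, 6, 8]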

RDD

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.RDD
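
Since the RDD aims to be a drop-in replacement for PySpark’s, the usual transformations and actions are available. A small sketch using method names from the PySpark API:

from pysparkling import Context

rdd = Context().parallelize(range(10))
evens = rdd.filter(lambda x: x % 2 == 0)
print(evens.count())    # 5
print(evens.collect())  # [0, 2, 4, 6, 8]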

Context

A Context describes the setup. Instantiating a Context with the default arguments, Context(), is the most lightweight setup: all data stays in the local thread and is never serialized or deserialized.

If you want to process the data in parallel, you can use the multiprocessing module. Because the default pickle serializer cannot handle lambdas and other closures, you can serialize all methods with cloudpickle instead. A common instantiation with multiprocessing looks like this:

import multiprocessing
import pickle
import cloudpickle
from pysparkling import Context

c = Context(
    multiprocessing.Pool(4),
    serializer=cloudpickle.dumps,
    deserializer=pickle.loads,
)

This assumes that your data is serializable with plain pickle, which is generally faster than cloudpickle. You can also specify a custom serializer and deserializer for data.
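
If thread-based parallelism is enough, any pool-like object with a map(func, iterable) method works, e.g. concurrent.futures.ThreadPoolExecutor; threads share memory, so no serializer is needed. A sketch:

import concurrent.futures

from pysparkling import Context

c = Context(concurrent.futures.ThreadPoolExecutor(4))
print(c.parallelize([1, 2, 3]).map(lambda x: x ** 2).collect())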

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.Context

fileio

The functionality provided by this module is used in Context.textFile() for reading and in RDD.saveAsTextFile() for writing. You can also use the submodule directly: File(filename).dump(some_data) writes a file, File(filename).load() reads one, and File.exists(path) checks for the existence of one. All methods transparently handle http://, s3:// and file:// locations and the compression/decompression of .gz and .bz2 files.

Use the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID for authentication, and use file paths of the form s3://bucket_name/filename.txt.
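
A short sketch of the direct File calls named above; the path is hypothetical, and depending on the version dump() may expect a binary stream while load() may return one:

import io

from pysparkling.fileio import File

# hypothetical local path; a .gz suffix is handled transparently
path = '/tmp/pysparkling-demo.txt.gz'

File(path).dump(io.BytesIO(b'hello world'))
print(File.exists(path))
print(File(path).load().read())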

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.fileio.File
