
pysparkling

A native Python implementation of Spark’s RDD interface. The primary objective is to remove the dependency on the JVM and Hadoop. The focus is on a lightweight and fast implementation for small datasets, at the expense of some data resilience and parallel processing features. It is a drop-in replacement for PySpark’s SparkContext and RDD.

Use case: you have a pipeline that processes 100k input documents and converts them to normalized features, which are then used to train a local scikit-learn classifier. That preprocessing is a perfect fit for a full Spark task. Now you want to use the trained classifier in an API endpoint, where the same preprocessing pipeline runs on a single document per call. This does not have to happen in parallel, but it should have only a small initialization overhead and preferably no dependency on the JVM. This is what pysparkling is for.


Install

pip install pysparkling[s3,hdfs,http]

Features

  • Supports multiple URI schemes: s3://, hdfs://, http:// and file://. Specify multiple files separated by comma. Resolves * and ? wildcards (see the sketch after this list).

  • Handles .gz, .zip, .lzma, .xz, .bz2, .tar, .tar.gz and .tar.bz2 compressed files. Supports reading of .7z files.

  • Parallelization via multiprocessing.Pool, concurrent.futures.ThreadPoolExecutor or any other Pool-like objects that have a map(func, iterable) method.

  • Plain pysparkling does not have any dependencies (use pip install pysparkling). Some file access methods have optional dependencies: boto for AWS S3, requests for HTTP and hdfs for HDFS.
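
A minimal sketch of the multi-file handling (the paths here are hypothetical and assume the files exist):

from pysparkling import Context

# Multiple sources are comma-separated, * and ? wildcards are resolved,
# and the .gz suffix is decompressed on the fly.
rdd = Context().textFile('logs/day-*.txt.gz,s3://my-bucket/logs/part-?.txt')
print(rdd.count())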

Examples

Some demos are in the notebooks docs/demo.ipynb and docs/iris.ipynb.

Word Count

from pysparkling import Context

counts = Context().textFile(
    'README.rst'
).map(
    # replace non-alphanumeric characters with spaces
    lambda line: ''.join(ch if ch.isalnum() else ' ' for ch in line)
).flatMap(
    # split into words; split() without arguments drops empty strings
    lambda line: line.split()
).map(
    # pair each word with a count of one
    lambda word: (word, 1)
).reduceByKey(
    # sum the counts for each word
    lambda a, b: a + b
)
print(counts.collect())

which prints a long list of pairs of words and their counts.

API

A usual pysparkling session starts either by parallelizing a list with Context.parallelize(my_list) or by reading data from a file with Context.textFile("path/to/textfile.txt"). Both methods return an RDD, which can then be processed with the methods below.
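
For example, a minimal session (the list and the squaring lambda are just illustrations):

from pysparkling import Context

rdd = Context().parallelize([1, 2, 3, 4])
print(rdd.map(lambda n: n ** 2).collect())  # [1, 4, 9, 16]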

RDD

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.RDD
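
Since the interface mirrors PySpark’s RDD, familiar methods compose as expected. A brief sketch, assuming the standard RDD methods filter, count and reduce are available as in PySpark:

from pysparkling import Context

rdd = Context().parallelize(range(10))
evens = rdd.filter(lambda n: n % 2 == 0)
print(evens.count())                     # 5
print(evens.reduce(lambda a, b: a + b))  # 20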

Context

A Context describes the setup. Instantiating a Context with the default arguments using Context() is the most lightweight option: all data stays in the local thread and is never serialized or deserialized.

If you want to process the data in parallel, you can use the multiprocessing module. Given the limitations of the default pickle serializer, you can serialize all methods with cloudpickle instead. For example, a common instantiation with multiprocessing looks like this:

import multiprocessing
import pickle
import cloudpickle
from pysparkling import Context

c = Context(
    multiprocessing.Pool(4),
    serializer=cloudpickle.dumps,
    deserializer=pickle.loads,
)

This assumes that your data is serializable with pickle, which is generally faster than cloudpickle. You can also specify a custom serializer and deserializer for data.
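
A thread pool works as well and avoids serialization entirely, since threads share memory. A minimal sketch:

import concurrent.futures
from pysparkling import Context

# Threads share memory, so no serializer/deserializer is needed here.
c = Context(concurrent.futures.ThreadPoolExecutor(4))
rdd = c.parallelize(range(10)).map(lambda n: n * n)
print(rdd.collect())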

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.Context

fileio

The functionality provided by this module is used in Context.textFile() for reading and in RDD.saveAsTextFile() for writing. You can also use this submodule directly: File(filename).load() reads a file, File(filename).dump(some_data) writes one and File.exists(path) checks for the existence of a file. All methods transparently handle http://, s3:// and file:// locations and compression/decompression of .gz and .bz2 files.

Use the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID for authentication, and use file paths of the form s3://bucket_name/filename.txt.
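
A brief sketch of direct file access. The path is hypothetical, and this assumes dump() accepts a bytes stream and load() returns one; see the API doc below for the exact signatures:

import io
from pysparkling.fileio import File

# Hypothetical local path; the .gz suffix enables transparent
# compression. An s3://bucket_name/... path works the same way,
# with credentials taken from the environment variables above.
File('demo.txt.gz').dump(io.BytesIO(b'hello world'))
print(File.exists('demo.txt.gz'))         # True
print(File('demo.txt.gz').load().read())  # b'hello world'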

API doc: http://pysparkling.trivial.io/v0.3/api.html#pysparkling.fileio.File

Development

Fork the GitHub repository, apply your changes in a feature branch and create a pull request. Please run nosetests to execute the unit test suite, including doctests.
