Pure Python implementation of the Spark RDD interface.
A native Python implementation of Spark’s RDD interface. The primary objective is to remove the dependency on the JVM and Hadoop. The focus is on a lightweight and fast implementation for small datasets, at the expense of some data resilience and parallel processing features. It is a drop-in replacement for PySpark’s SparkContext and RDD.
Use case: you have a pipeline that processes 100k input documents and converts them to normalized features, which are used to train a local scikit-learn classifier. The preprocessing is a perfect fit for a full Spark task. Now you want to use the trained classifier in an API endpoint, and you need the same preprocessing pipeline for a single document per API call. This does not have to run in parallel, but initialization should add only a small overhead and there should preferably be no dependency on the JVM. This is what pysparkling is for.
pip install pysparkling[s3,hdfs,http]
- Supports multiple URI schemes: s3://, hdfs://, http:// and file://. Multiple files can be specified separated by commas, and * and ? wildcards are resolved (see the sketch after this list).
- Handles .gz and .bz2 compressed files.
- Parallelization via multiprocessing.Pool, concurrent.futures.ThreadPoolExecutor or any other Pool-like objects that have a map(func, iterable) method.
- Plain pysparkling does not have any dependencies (use pip install pysparkling). Some file access methods have optional dependencies: boto for AWS S3, requests for http and hdfs for hdfs.
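For example, a small sketch of multi-file reading (the bucket and paths are hypothetical, and reading from S3 requires the optional boto dependency):

    from pysparkling import Context

    # comma-separated paths and * wildcards are resolved;
    # .gz files are decompressed transparently
    rdd = Context().textFile(
        's3://my-bucket/logs/2015-*.txt.gz,s3://my-bucket/logs/extra.txt'
    )
    print(rdd.count())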
The change log is in HISTORY.rst.
    from pysparkling import Context

    counts = Context().textFile(
        'README.rst'
    ).map(
        lambda line: ''.join(ch if ch.isalnum() else ' ' for ch in line)
    ).flatMap(
        lambda line: line.split(' ')
    ).map(
        lambda word: (word, 1)
    ).reduceByKey(
        lambda a, b: a + b
    )
    print(counts.collect())
which prints a long list of pairs of words and their counts. This and more advanced examples are demoed in docs/demo.ipynb.
A usual pysparkling session starts either by parallelizing a list with Context.parallelize(my_list) or by reading data from a file with Context.textFile("path/to/textfile.txt"). These two methods return an RDD, which can then be processed with the methods below.
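For instance, a minimal session based on parallelize (a sketch):

    from pysparkling import Context

    rdd = Context().parallelize([1, 2, 3, 4])
    print(rdd.map(lambda x: x * x).collect())  # [1, 4, 9, 16]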
A Context describes the setup. Instantiating a Context with the default arguments using Context() is the most lightweight setup. All data is just in the local thread and is never serialized or deserialized.
If you want to process the data in parallel, you can use the multiprocessing module. Given the limitations of the default pickle serializer (it cannot serialize lambdas, for example), you can specify that all methods be serialized with cloudpickle instead. For example, a common instantiation with multiprocessing looks like this:
    import multiprocessing
    import pickle
    import cloudpickle
    from pysparkling import Context

    c = Context(
        multiprocessing.Pool(4),
        serializer=cloudpickle.dumps,
        deserializer=pickle.loads,
    )
This assumes that your data is serializable with plain pickle, which is generally faster. You can also specify a custom serializer and deserializer for the data.
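As a sketch, assuming the Context constructor of your pysparkling version accepts data_serializer and data_deserializer keyword arguments for the data:

    import multiprocessing
    import pickle
    import cloudpickle
    from pysparkling import Context

    # functions are serialized with cloudpickle, the data itself with
    # plain pickle (data_serializer/data_deserializer are assumed kwargs)
    c = Context(
        multiprocessing.Pool(4),
        serializer=cloudpickle.dumps,
        deserializer=pickle.loads,
        data_serializer=pickle.dumps,
        data_deserializer=pickle.loads,
    )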
The functionality provided by this module is used in Context.textFile() for reading and in RDD.saveAsTextFile() for writing. You can also use this submodule directly: File(filename).dump(some_data) writes a file, File(filename).load() reads one, and File.exists(path) checks for its existence. All methods transparently handle http://, s3:// and file:// locations and compression/decompression of .gz and .bz2 files.
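A minimal sketch of direct file access (the filename is hypothetical; dump() is assumed to accept a bytes stream and load() to return one):

    from io import BytesIO
    from pysparkling.fileio import File

    # write a gzip-compressed file, read it back and check its existence
    File('hello.txt.gz').dump(BytesIO(b'Hello World!'))
    print(File('hello.txt.gz').load().read())
    print(File.exists('hello.txt.gz'))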
Use environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID for auth and use file paths of the form s3://bucket_name/filename.txt.
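For example, a sketch of reading from S3 (the bucket and key pattern are hypothetical; credentials are taken from the environment variables above and the optional boto dependency is required):

    from pysparkling import Context

    # wildcards and .gz compression are handled transparently
    rdd = Context().textFile('s3://my-bucket/logs/2015-*.txt.gz')
    print(rdd.count())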
Release files for version 0.3.3:

- pysparkling-0.3.3-py2.py3-none-any.whl (32.0 kB, Wheel, Python 2.7)
- pysparkling-0.3.3.tar.gz (24.1 kB, Source)