PySpark utility functions
pyspark-util
A set of PySpark utility functions.
import pyspark_util as psu
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

data = [(1, 2, 3)]
columns = ['a', 'b', 'c']
df = spark.createDataFrame(data, columns)
prefixed = psu.prefix_columns(df, 'x')
prefixed.show()
# output:
+---+---+---+
|x_a|x_b|x_c|
+---+---+---+
|  1|  2|  3|
+---+---+---+
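A helper like prefix_columns typically just renames every column in a single projection. Below is a minimal sketch of such a function, assuming a (df, prefix) signature and an underscore separator as in the output above; the actual pyspark_util implementation may differ.

from pyspark.sql import DataFrame

def prefix_columns(df: DataFrame, prefix: str, sep: str = '_') -> DataFrame:
    # Rename every column to '<prefix><sep><name>' in one select() projection
    return df.select(*[df[c].alias(f'{prefix}{sep}{c}') for c in df.columns])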
Development
Setup
docker-compose build
docker-compose up -d
Lint
docker exec psu-cnt ./tools/lint.sh
Test
docker exec psu-cnt ./tools/test.sh
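Outside the container, the same behaviour can be checked against a local in-memory SparkSession. A minimal sketch of such a test follows; the test layout is an assumption, not the project's actual suite.

import pyspark_util as psu
from pyspark.sql import SparkSession

def test_prefix_columns():
    # local[1] keeps the test self-contained; no cluster is required
    spark = SparkSession.builder.master('local[1]').getOrCreate()
    df = spark.createDataFrame([(1, 2, 3)], ['a', 'b', 'c'])
    prefixed = psu.prefix_columns(df, 'x')
    assert prefixed.columns == ['x_a', 'x_b', 'x_c']
    spark.stop()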
Download files
Source Distributions
No source distribution files are available for this release.
Built Distribution
Hashes for pyspark_util-0.1.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 4b310d283bd18e4d3e20cf9cc2e2f0453e98e57040613ede1082cea1e6c7397b
MD5 | 051d3c13463abaa196e4ba0030eb478e
BLAKE2b-256 | 830c0acf6b0471dfee4a4c5969d87b462fce981bbc9c43a799df987c16b47cf8